Shaping Minds: How First Impressions Drive AI Adoption


Make-or-break moments. The first interaction with an AI system — whether it’s a website, landing page, or demo — shapes the mental model of the system. This, in turn, determines whether it will be adopted or not. Are these decisions driven by emotion or logic? Here’s how Technology Adoption Theory unpacks the mechanisms of technology acceptance, with insights applied to AI systems.

By Katie Metz, source

It takes only 10 seconds for someone to decide whether a website is worth their time or not. And while there is a wealth of resources on designing user-centered products, far fewer focus on how to communicate their value during those pivotal first moments — the initial touchpoints that shape a user’s mental model of the system. This challenge is especially acute for AI systems, which are often technically complex and hard to simplify. I’ve seen talented teams build extraordinary, user-focused solutions, only to struggle with conveying their true value in a way that’s clear, engaging, and instantly meaningful.

This article delves into the psychological barriers to accepting and adopting new technologies, offering insights on how to highlight your product’s value and transform first impressions into lasting connections.

New tech, old habits

Have you heard of Khanmigo? It’s an AI-driven teaching assistant from Khan Academy, designed to guide students through their learning journey with engaging, conversational interactions. It’s empathetic, engaging, and patient. Make a mistake? No problem. It’ll gently explain what went wrong and how to fix it, creating a learning experience that feels less like being corrected and more like growing together. It’s a glimpse into how AI can reinvent old patterns, making interactions more personal, more flexible, and, dare I say, more human.

The image shows two interactive chat exchanges. On the left, a student says: “I still don’t understand, I’m about to give up.” The AI assistant responds, “Don’t worry! Let’s take a step back, can you see which two things are being multiplied in this expression? 1/2 (4a+1),” alongside a friendly cartoon face. On the right, the student asks for help solving 9(x-4)=-18. The AI assistant replies, “Absolutely! This is a basic algebraic equation. Let’s solve it together.” Source: Khanmigo

Of course, kids are a relatively easy audience for Khanmigo, as they are naturally open to such innovations. They don’t carry years of “learning fatigue,” forged by sitting through endless lectures and associating study time with boredom. AI meets them where they are, unspoiled and eager.

Now imagine a different scenario: a car equipped with AI that tracks your facial expressions and eyelid movements to detect when you’re too tired to drive safely. It suggests, perhaps with a subtle alarm, that you pull over for a rest. Tell that to my grandpa, though, and he’d probably chuckle at the idea that a camera could know better than he does when he needs a break. There will always be early adopters — those eager to embrace the new and exciting — and those who resist, for reasons that may be logical or deeply personal. For instance, some might worry that AI will take their job, while others may mistrust the technology purely because it feels unfamiliar or intrusive. Understanding and addressing these perspectives is the first step towards designing AI systems that can bridge the gap between skepticism and acceptance.

The good news? This isn’t a new challenge. Humanity has faced it during every industrial revolution, each time adapting its thinking to a new normal. While I won’t delve into all of these transformative eras — or the ongoing Fourth Industrial Revolution — I’d like to focus on the most recent completed one. Let’s rewind to the Third Industrial Revolution — the dawn of the computer and internet age in the late 20th century — and explore the key ideas it produced for facilitating system adoption.

When computers met humanity

The 1980s marked a significant turning point in the study of technology adoption, spurred by the rapid rise of personal computers and the challenge of integrating these new tools into everyday life. Researchers quickly recognized the need to focus on factors like user involvement in the design and implementation of information systems. This emphasis acknowledged a simple truth: technology is only as effective as its ability to meet the needs of the people who use it.

A black-and-white photograph of a woman seated at a desk working on a vintage computer system. The setup includes a large CRT monitor displaying text, a typewriter-style keyboard with paper being fed through it, and additional documents or equipment on the desk. In the background, two men are standing and talking near a glass partition. The scene appears to be from an office or exhibition setting. 1983, source

On the practical side, industry practitioners concentrated on developing and refining system designs, aiming to make them more user-friendly and effective. My favorite example is the research at Xerox PARC (Palo Alto Research Center), where researchers closely observed office workers’ behaviors and workflows. Their insights led to the creation of the desktop metaphor, introducing familiar concepts like files, folders, and a workspace that mirrored physical desks. This innovation revolutionized graphical user interfaces (GUIs), laying the foundation for systems like Apple’s Macintosh and Microsoft Windows. The Dream Machine by M. Mitchell Waldrop and Dealers of Lightning by Michael Hiltzik share more details about the history and impact of Xerox PARC.

These parallel efforts — academic research and hands-on development — led to the creation of numerous theories and frameworks to better understand and guide technology adoption. Among these frameworks, the Technology Acceptance Model (TAM) stands out as one of the most influential.

Technology Acceptance Model

Back in 1986, Fred Davis created the Technology Acceptance Model (TAM) to answer a simple but pivotal question: why do some people adopt new technology while others resist? The model measures this adoption process by focusing on user attitudes — specifically, whether the technology feels useful and easy to use. These two factors form the foundation of the model, offering a lens to understand how people decide to embrace (or avoid) new tools and systems.

The first factor, perceived usefulness, is how much a user believes the technology will improve their performance or productivity. It’s outcome-oriented, zeroing in on whether the tool helps users achieve their goals, complete tasks faster, or deliver better results.

The second factor of TAM is perceived ease of use — the belief that using the technology will be simple and free of unnecessary effort. While usefulness might get a user’s attention, ease of use determines whether they’ll stick with it. If a system feels complicated, clunky, or overly technical, even its benefits might not be enough to win users over. People naturally gravitate toward tools that feel intuitive.

The diagram consists of four rectangular boxes connected by arrows. The first two boxes, labeled “Perceived usefulness” and “Perceived ease of use,” are connected by dashed arrows to a central box labeled “Attitude toward using.” This central box is connected by a solid arrow to a box on the right labeled “Actual system use.” The entire diagram is enclosed in a section labeled “User motivation” with a purple header. Adapted from the Technology Acceptance Model (Davis, 1986), source
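To see how these two factors are typically put to work, consider how TAM is often operationalized in survey research: each construct is measured with a handful of Likert-scale items, and attitude is modeled as a weighted combination of the two perceptions. The Python sketch below is a toy illustration of that idea; the item wording and weights are assumptions made up for the example, not values from Davis’s study.

```python
from statistics import mean

# A toy operationalization of TAM: each construct is the mean of a few
# 1-7 Likert items, and attitude toward using is a weighted blend of
# perceived usefulness (PU) and perceived ease of use (PEOU).
# The weights below are illustrative assumptions, not Davis's estimates.
PU_WEIGHT, PEOU_WEIGHT = 0.6, 0.4

def construct_score(item_ratings: list[int]) -> float:
    """Average a construct's Likert items (1 = strongly disagree, 7 = strongly agree)."""
    return mean(item_ratings)

def attitude_toward_using(pu_items: list[int], peou_items: list[int]) -> float:
    pu = construct_score(pu_items)      # e.g. "Using this tool improves my work"
    peou = construct_score(peou_items)  # e.g. "This tool is easy to learn and use"
    return PU_WEIGHT * pu + PEOU_WEIGHT * peou

# A user who finds an AI assistant useful but somewhat hard to use:
score = attitude_toward_using(pu_items=[6, 7, 6], peou_items=[3, 4, 3])
print(f"Attitude toward using: {score:.2f} / 7")  # -> 5.13
```

Even in this crude form, the model captures the trade-off the article keeps returning to: strong usefulness can only partly compensate for a clunky experience.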

In 2000, Venkatesh and Davis expanded the original TAM to dig deeper into what shapes Perceived Usefulness and people’s intentions to use technology. They introduced two key influences: social influence — how the opinions of others and societal norms impact adoption — and cognitive instrumental processes, which focus on how users mentally evaluate and connect with a system. Let’s unpack these factors and explore how they can help shape a mental model of an AI system that fosters adoption.

Perceived Usefulness

Perceived Usefulness doesn’t exist in a vacuum. One of the social factors is subjective norm, or the pressure we feel from others to use (or not use) a particular technology. This ties closely to image, the way adopting a tool might enhance someone’s status or reputation — think of design influencers after attending Config, dissecting the latest features and showcasing their expertise.

But subjective norm doesn’t impact everyone the same way. Experience can dull its influence. For those just starting with a new system, social pressure often holds more weight — unsure of their footing, they look to others for guidance. As they grow more comfortable, though, external opinions start to matter less, and their own evaluation takes over. Voluntariness also changes the game. When adoption is a choice, users are less swayed by others’ opinions. But when it’s required — whether by a workplace mandate or social obligation — subjective norm has a much stronger pull.

On the cognitive side, job relevance plays a big role. Users ask, “Does this technology actually help me in my specific role?” If the answer is no, it’s unlikely they’ll see it as useful. Similarly, output quality — whether the system delivers results that meet or exceed expectations — reinforces its value. Finally, there’s result demonstrability, or how clearly the benefits of the technology can be observed and communicated. The easier it is to see and measure the impact, the more likely users are to view it as useful.

The diagram consists of four rectangular boxes. Three boxes — “Perceived usefulness,” “Perceived ease of use,” and “Attitude toward using” — are grouped within a section labeled “User motivation,” highlighted with a purple header. The boxes “Perceived usefulness” and “Perceived ease of use” are connected by dashed arrows to “Attitude toward using.” A solid arrow connects “Attitude toward using” to the fourth box, “Actual system use,” which is outside the “User motivation” section. Adapted from Technology Acceptance Model (TAM 2) by Venkatesh and Davis, 2000, source
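As a rough illustration of how these moderators interact, the sketch below extends the toy model above: the pull of subjective norm weakens as experience grows, and weakens further when adoption is voluntary, mirroring the pattern Venkatesh and Davis describe. The coefficients are invented for the example, not taken from their paper.

```python
def subjective_norm_effect(norm: float, experience: float, voluntary: bool) -> float:
    """Contribution of subjective norm (a 1-7 rating of perceived social
    pressure) to perceived usefulness. `experience` runs from 0.0 (novice)
    to 1.0 (seasoned user). All coefficients are illustrative assumptions."""
    weight = 0.5 * (1.0 - experience)   # social pressure fades with experience
    if voluntary:
        weight *= 0.5                   # optional adoption: others' opinions pull less
    return weight * norm

# A novice under a workplace mandate vs. a veteran adopting by choice:
novice = subjective_norm_effect(norm=6, experience=0.1, voluntary=False)
veteran = subjective_norm_effect(norm=6, experience=0.9, voluntary=True)
print(f"mandated novice: {novice:.2f}, voluntary veteran: {veteran:.2f}")  # 2.70 vs 0.15
```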

While product design can’t directly influence subjective norm, it often plays a role in shaping image — how people perceive themselves or imagine others will see them when they adopt the technology. It’s not so much about the product itself, but what using it says about the individual. By focusing on the right narrative from the very first touchpoint, some applications make it easy for users to see how adopting the tool reflects positively on them.

Take folk.app, for instance. Instead of just listing features, it focuses on solving specific pain points, framing the app as a tool for staying organized and professional. The messaging feels personal and practical. For example, a section title like “Sales research, done for you” suggests that without any additional effort, users will have valuable insights at their fingertips. It’s not just about solving a problem; it’s about positioning the user as more prepared, professional, and efficient.

A screenshot of the Folk website, showing the “Research” tab in the features section. It is titled “Sales research, done for you,” with a call-to-action button labeled “Explore 1-click enrichment.” Below the title, there is a short introductory text describing the feature. Folk.app, source

Braintrust takes a different angle. They highlight glowing media endorsements, signaling that the platform is widely recognized. It’s not just about saying that the app works; it’s about creating a sense that using it puts you on the cutting edge, part of a forward-thinking community. This builds image, making users feel like adopting the technology aligns with innovation and success.

A screenshot of the Braintrust website featuring a photo on the left of a woman in glasses and a striped shirt working on a laptop in a café-like setting with a blurred background. On the right, quotes from notable publications highlight Braintrust’s impact, including its disruption of hiring processes, cost reduction for freelancers and clients, user control of the platform, and its popularity with over 24,000 people on a waiting list. Braintrust, source

Perceived Ease of Use

If perceived usefulness answers the question, “Will this technology help me?”, then perceived ease of use asks an equally important question: “Will it be easy to figure out?” Research shows that this perception is influenced by two main groups of factors — anchors and adjustments.

Anchors serve as the starting point for a user’s judgment of ease. They include internal traits and predispositions, such as computer self-efficacy — a user’s confidence in their ability to use technology — and perceptions of external control, or the belief that support and resources are available if needed. Another anchor is computer playfulness, which reflects a user’s natural tendency to explore and experiment with technology. This sense of curiosity can make systems feel more approachable, even when they’re complex. On the flip side, computer anxiety, or a fear of engaging with technology, can act as a barrier, making systems seem more difficult than they really are. When applying these principles to AI systems, we see a new form of apprehension emerging: AI anxiety.

Once users begin interacting with a system, adjustments come into play. Unlike anchors, which are deeply rooted in a user’s pre-existing traits and beliefs, adjustments are dynamic — they refine or reshape initial perceptions of ease of use based on real-world experience with the system.

One key adjustment is perceived enjoyment, which asks whether the act of using the system is inherently satisfying or even delightful. This concept is closely tied to user delight, where interactions go beyond pure functionality to create moments of joy or surprise. Have you ever searched for “cat” in Google and noticed a yellow button with a paw? That’s delight. It’s unexpected, playful, and entirely unnecessary for functionality — but it sticks with you.

Another adjustment is objective usability — the system’s actual performance as observed during use. Before interacting with the system, a user might assume it will be complex or difficult. But as they engage with the AI, accurate and intuitive responses can shift this perception, reinforcing the idea that the system is not only functional but easy to use.

A diagram with three grouped sections. The “Anchor” group includes “Computer self-efficacy,” “Perceptions of external control,” “Computer anxiety,” and “Computer playfulness,” with dashed arrows leading to “Perceived ease of use.” The “Adjustment” group contains “Perceived enjoyment” and “Objective usability,” also connected to “Perceived ease of use.” The “User motivation” group includes “Perceived usefulness,” “Perceived ease of use,” and “Attitude toward using,” linked to “Actual system use.” Adapted from Technology Acceptance Model (TAM 3) by Venkatesh and Bala, 2008.
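The anchor-then-adjust dynamic lends itself to the same kind of sketch: anchors set a user’s initial ease-of-use estimate before they ever touch the system, and each hands-on session then pulls that estimate toward what they actually observe. As before, the traits, weights, and learning rate are illustrative assumptions, not values from the TAM 3 studies.

```python
def initial_peou(self_efficacy: float, external_control: float,
                 playfulness: float, anxiety: float) -> float:
    """Anchor: a pre-use estimate of ease of use built from general traits
    (each rated 1-7). Anxiety pulls the estimate down. The weights are
    illustrative assumptions, not Venkatesh and Bala's estimates."""
    score = (0.35 * self_efficacy + 0.25 * external_control
             + 0.25 * playfulness - 0.15 * anxiety + 1.0)
    return min(7.0, max(1.0, score))

def adjust_peou(current: float, observed_usability: float, enjoyment: float,
                learning_rate: float = 0.5) -> float:
    """Adjustment: each hands-on session pulls the estimate toward what the
    user actually experiences (objective usability plus enjoyment, 1-7)."""
    experienced = 0.7 * observed_usability + 0.3 * enjoyment
    return current + learning_rate * (experienced - current)

# An anxious first-time user whose sessions with the AI go smoothly:
peou = initial_peou(self_efficacy=3, external_control=4, playfulness=2, anxiety=6)
for _ in range(3):
    peou = adjust_peou(peou, observed_usability=6, enjoyment=5)
print(f"Perceived ease of use after 3 sessions: {peou:.2f}")  # ~5.32 (up from 2.65)
```

The point of the toy update rule is the asymmetry it makes visible: a poor anchor can be repaired by good sessions, but only if the first impression is good enough to get users to a second session at all.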

Computer self-efficacy can’t be controlled directly, but it can definitely be nudged in the right direction. The secret lies in making the application feel approachable, so users believe they’re capable of mastering it.

One way to do this is by showcasing the experiences of others. Highlighting user reviews or testimonials isn’t just about marketing — it taps into vicarious experience, a key mechanism in Bandura’s Social Cognitive Theory. When people see others successfully using a tool, they start to think, “If they can handle it, why can’t I?” It’s not just about proof; it’s about planting the seed of possibility.

The image is a screenshot of the Contra website featuring testimonials from users. Adriano Reis, Barbiana Liu, and Aishwarya Agrawal share how Contra Pro has helped them earn $50K+ to $100K+, connect with clients, and streamline their workflow. Contra, source

Another approach is helping users form a mental map of how the technology works. GitBook, for example, pairs feature descriptions with skeleton-state interface snippets — clean, minimalist snapshots that give users just enough information to understand the basics without overwhelming them. Animations guide their focus, while interactive elements bring in a subtle gamification layer, making learning feel less like a chore and more like discovery. It’s user-centric design done right — a confidence boost, one step at a time.

A screenshot from the GitBook website showcasing the “Internal Docs” feature. The illustration displays a mockup of a “Company handbook” with floating pointers labeled “Engineer,” “Technical writer,” “Marketing,” and “Support.” On the right, the title reads “Better internal docs,” followed by text explaining how GitBook provides a flexible home for code docs, technical wikis, product plans, and more, with a Git-like workflow. GitBook, source

Slite provides an example of how the ‘job relevance’ factor can make a product introduction resonate right from the first page. One of the challenges in introducing a knowledge base is resistance to sharing information. Studies reveal that 60% of employees struggle to obtain critical information from colleagues, often due to a phenomenon known as ‘knowledge hiding’ — the deliberate withholding or concealing of information. This behavior stems from fears like losing status or job security, creating barriers to collaboration and productivity.

Slite tackles this challenge head-on with a playful, relatable touch, wrapping it in humor: ‘The knowledge base even [Name] from [one of 6 target industries] wants to use.’ This subtle nod to targeted pain points highlights its key differentiators: beautiful documentation, hassle-free adoption, and AI-powered search from day one, emphasizing perceived enjoyment — after all, who doesn’t love beautiful, effortless solutions?

It’s not just about functionality; it’s about creating a product so intuitive and engaging that it minimizes resistance and inspires adoption, transforming apprehension into enthusiasm.

A screenshot of the Slite website showcasing its knowledge base solution. The headline reads, “The knowledge base even Lee in HR wants to use,” with “Lee” and “HR” highlighted in blue and underlined. Below, a subheading states, “Skip the software learning curve: Slite delivers beautiful documentation, hassle-free adoption, and AI-powered search from day one.” Slite, source

Final thoughts

The Technology Acceptance Model, while valuable, is not a universal solution but rather a framework — a lens through which we can examine and interpret the dynamics of technology adoption. Since its introduction nearly four decades ago, it has illuminated patterns in how users perceive and engage with technology. However, it also risks being applied too generally, glossing over the nuanced and context-specific factors that shape user behavior. Rooted in the psychological theories of reasoned action and planned behavior, TAM serves as a navigator — helping us better understand and adapt to the complexities of human affective reasoning. By recognizing its strengths and limitations, we can use it as a guide to create technology experiences that truly resonate with the people they are designed to serve.


Have ideas, thoughts, or experiences to share? Leave your insights in the comments!

