Why User Self-Efficacy Matters for AI Product Success
New Research on What Makes Users Trust GenAI Chatbots
As generative AI becomes increasingly prevalent in e-commerce and digital services, understanding the factors that drive user adoption has become a research priority. A new study published in Scientific Reports by Li, Zhou, Hu, and Liu (2025) provides compelling evidence that the answer lies in how human-like features of AI systems influence user psychology through distinct cognitive pathways.
Background: The Rise of Human-Like AI
The integration of generative AI (GenAI) into consumer platforms represents a significant shift in how people interact with technology. Research has documented how major e-commerce platforms now deploy GenAI-powered chatbots to help users filter products, place orders, and resolve queries (Kramer, 2024). However, despite improvements in AI quality and availability, adoption remains inconsistent. Studies have identified barriers including information overload, technological anxiety, and concerns about system transparency as critical obstacles to user adoption and continued use (e.g., Gupta & Mukherjee, 2024).
This adoption gap has prompted researchers to investigate anthropomorphism — the attribution of human characteristics to non-human entities — as a potential bridge between AI capability and user acceptance. Studies have shown that making AI systems appear warm, empathetic, and emotionally responsive can significantly enhance human-computer interaction (Chakraborty, Kar, Patre, & Gupta, 2024). Previous research in social cognition has established that perceived warmth and perceived competence are two fundamental dimensions shaping how users perceive personality (McKee, Bai, & Fiske, 2023): warmth encompasses traits like friendliness and trustworthiness, while competence reflects intelligence, skill, and efficiency (Harris-Watson et al., 2023).
However, an important gap remained: the psychological mechanisms through which anthropomorphic traits influence user behaviour were unclear. To understand how anthropomorphic features actually influence user behaviour, Li et al. (2025) drew upon the Elaboration Likelihood Model (ELM), a theory about how people process persuasive information. The ELM was useful here because it explains that people evaluate information in two fundamentally different ways, depending on their motivation and cognitive capacity.
The ELM proposes two different routes:
Central route processing: People carefully think through information, analysing details and quality — this requires mental effort and focus.
Peripheral route processing: People rely on quick cues and gut feelings — shortcuts that don’t require much thinking.
Think of it this way: when a user is researching a major purchase with lots of time, they might read detailed reviews and compare specifications (central route). When they’re rushed or overwhelmed (e.g., if their phone unexpectedly breaks down), however, they might just go with “this looks trustworthy” or “other people seem to like it” (peripheral route).
The researchers applied this framework to understand whether different types of anthropomorphic features (warmth and empathy versus competence and intelligence) might work through these different mental pathways. If warmth and empathy serve as quick emotional cues (peripheral route), while competence and intelligence require more careful evaluation (central route), they might influence users differently, especially under varying conditions like information overload.
Previous studies had already applied this framework to AI contexts with promising results. Chen et al. (2025) found that features like recommendation accuracy and credibility work through the central route (requiring careful evaluation), while features like friendly tone and visual appeal work through the peripheral route (providing quick cues). Zhang et al. (2024) showed that both routes can effectively influence whether people adopt AI recommendations, but these studies hadn’t examined the underlying psychological mechanism connecting these features to adoption decisions.
Li et al. (2025) addressed this gap by proposing self-efficacy — essentially, users’ confidence in their own ability to successfully use the AI system — as the missing link. This isn’t about whether the AI is good; it’s about whether users believe they can work with it effectively. Research has shown that self-efficacy plays a major role in whether people adopt new technologies (Ulfert-Blank & Schmidt, 2022). More recently, Bui and Duong (2024) found that when ChatGPT boosted people’s confidence in their abilities, they were more likely to intend to use it. The current study proposed that anthropomorphic features influence adoption through their effect on self-efficacy — first they make users feel more confident, and that confidence then drives adoption intention.
The study also incorporated information overload as a moderating factor. This addition was important because when users face overwhelming amounts of information during online shopping, their decision-making processes change (Sivathanu et al., 2023). Shin (2023) found that as tasks become more difficult, people tend to rely more on AI recommendations. The researchers wanted to understand whether information overload would strengthen or weaken the effects of different anthropomorphic features on user confidence.
The Current Study
Li et al. (2025) surveyed 306 users of Ali Xiaomi, a GenAI-powered shopping assistant on China’s Taobao e-commerce platform. Because participants were actual users of the assistant, the findings reflect real-world AI adoption. The sample was roughly balanced by gender (52.3% male, 47.7% female), with an average age of 33.3 years.
The researchers measured six key variables using validated scales, with participants rating statements on 7-point scales:
Peripheral cues: Human-like empathy and perceived warmth (emotional, intuitive features)
Central cues: Perceived competence and perceived intelligence (analytical, capability-focused features)
Mediating variable: Self-efficacy (users’ confidence in their ability to use the chatbot)
Moderating variable: Information overload (feeling overwhelmed by too much information)
Outcome: Adoption intention (willingness to use AI recommendations as a decision aid)
Using structural equation modelling, the researchers tested whether anthropomorphic features would enhance self-efficacy, whether self-efficacy would predict adoption intention, and whether information overload would moderate these relationships.
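For readers who want to see the analytic logic in concrete form, here is a minimal sketch — not the authors’ code or exact model — of how a mediation structure with moderation like this could be specified in Python with the semopy package. The variable names are hypothetical stand-ins for the survey scales, and the synthetic data merely makes the snippet runnable.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Hypothetical data standing in for the survey: one row per respondent,
# each column the mean of a 7-point scale. A real analysis would load the
# actual responses instead.
rng = np.random.default_rng(0)
n = 306
df = pd.DataFrame({
    col: rng.uniform(1, 7, n)
    for col in ["empathy", "warmth", "competence", "intelligence",
                "overload", "self_efficacy", "adoption_intention"]
})

# Moderation is commonly tested with mean-centred interaction terms.
for cue in ["empathy", "warmth", "competence", "intelligence"]:
    df[f"{cue}_x_overload"] = (
        (df[cue] - df[cue].mean()) * (df["overload"] - df["overload"].mean())
    )

model_desc = """
# Anthropomorphic cues (plus overload interactions) predict self-efficacy
self_efficacy ~ empathy + warmth + competence + intelligence + overload
self_efficacy ~ empathy_x_overload + warmth_x_overload + competence_x_overload + intelligence_x_overload

# Self-efficacy predicts adoption intention (the mediation pathway)
adoption_intention ~ self_efficacy
"""

model = Model(model_desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```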
The results revealed clear patterns about which human-like features matter for adoption. Out of 13 hypotheses, 9 were supported while 4 were not:
What boosts user confidence: Three anthropomorphic features significantly enhanced self-efficacy—human-like empathy, perceived warmth, and perceived competence. Notably, warmth had the strongest effect. However, perceived intelligence had no significant effect, suggesting that making AI seem super intelligent doesn’t boost users’ confidence in their ability to use it.
Confidence drives adoption: Self-efficacy strongly predicted adoption intention, explaining 38.6% of variance. Importantly, anthropomorphic features worked through self-efficacy rather than directly influencing adoption—they first boosted confidence, which then led to adoption intention. This mediation was significant for empathy, warmth, and competence, but not for intelligence.
Information overload amplifies emotional cues: When users felt overwhelmed by information, the effects of empathy and warmth on self-efficacy became stronger. However, information overload didn’t affect the influence of competence or intelligence, suggesting emotional support becomes disproportionately valuable under cognitive stress while technical capabilities maintain steady influence.
What This Means for Understanding AI-Human Interaction
These findings advance our understanding in three key ways:
The ELM explains AI adoption patterns: Building on Chen et al. (2025), this study shows that warmth and empathy work as peripheral cues (emotional shortcuts) while competence works as a central cue (requiring deliberate evaluation). Both lead to adoption but through different pathways and with different sensitivities to context.
User confidence is the missing link: It’s not enough for AI to be good — users need to feel they can work with it successfully. This shifts design focus from showcasing AI capabilities to building user capabilities. Importantly, not all human-like features build confidence; perceived intelligence did not, and overemphasising it may even backfire.
Information overload’s effects are nuanced: Rather than being purely negative, information overload activates emotional cues as helpful shortcuts. This reframes overload as a contextual factor that changes which design strategies work best.
Recommendations for UX Professionals
These findings can translate into the following design principles:
1. Design for user confidence, not AI sophistication
Users adopt systems when they feel confident in their own abilities, not when they’re awed by AI capabilities.
Action items:
Focus on making users feel competent and in control
Explain how AI works in simple terms — avoid “black box” perceptions
Help users gradually build skills through progressive challenges
Reveal advanced features gradually (progressive disclosure; see the sketch after this list)
Test by asking: “Does this make users feel smarter or inadequate?”
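To make the progressive-disclosure items above concrete, here is a minimal Python sketch of gating advanced assistant features by demonstrated mastery. The feature names and unlock thresholds are hypothetical, not drawn from the study.

```python
from dataclasses import dataclass

# Hypothetical feature tiers: advanced capabilities only appear once a user
# has successfully completed simpler tasks.
FEATURE_TIERS = {
    0: ["basic_search", "order_status"],                # always available
    1: ["filter_by_review_sentiment", "price_alerts"],  # after a few successes
    2: ["multi_item_comparison", "custom_automation"],  # after sustained use
}

@dataclass
class UserProgress:
    successful_tasks: int = 0

    @property
    def tier(self) -> int:
        if self.successful_tasks >= 10:
            return 2
        if self.successful_tasks >= 3:
            return 1
        return 0

def visible_features(progress: UserProgress) -> list[str]:
    """Return only the features the user is ready for, so the interface
    never confronts a newcomer with the full capability list."""
    return [f for tier, feats in FEATURE_TIERS.items()
            if tier <= progress.tier for f in feats]

# A new user sees only the basics; a practised user sees more.
print(visible_features(UserProgress(successful_tasks=0)))
print(visible_features(UserProgress(successful_tasks=5)))
```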
2. Prioritise warmth and empathy, especially under stress
Warmth had the strongest effect on confidence, particularly under information overload.
Action items:
Use language that acknowledges user emotions and challenges
Respond empathetically to confusion or frustration
Design confirmations that validate user decisions and reinforce competence
Test different tones to find the right balance
Increase emotional support when detecting user struggle (repeated queries, backtracking)
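A rough sketch of what the last item could look like in code: the struggle signals (repeated queries, backtracking) come from the recommendation above, while the thresholds and the empathetic framing text are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    repeated_queries: int = 0   # near-duplicate questions in a row
    backtracks: int = 0         # times the user undid or reversed a step

def is_struggling(signals: SessionSignals) -> bool:
    # Hypothetical thresholds; real products would tune these empirically.
    return signals.repeated_queries >= 2 or signals.backtracks >= 3

def frame_response(answer: str, signals: SessionSignals) -> str:
    """Prepend an empathetic, competence-reinforcing frame when the user
    appears stuck; otherwise return the answer unchanged."""
    if is_struggling(signals):
        return (
            "It looks like this one is tricky, and that's completely normal. "
            "Let's take it one step at a time.\n\n" + answer
        )
    return answer

print(frame_response("Here are three options under your budget.",
                     SessionSignals(repeated_queries=3)))
```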
3. Show competence through clarity, not complexity
Competence helps when demonstrated through understandable performance. Intelligence can backfire.
Action items:
Demonstrate reliability in specific, concrete ways
Explain why AI made recommendations using simple reasoning
Emphasise consistent performance over impressive capabilities
Frame competence around user goals, not technical sophistication
Avoid language suggesting AI “thinks” in human-like ways
Test whether competence signals make users feel “this will help me” or “this is beyond me”
4. Adapt design to information overload
Information overload amplifies warmth and empathy effects.
Action items:
Detect overwhelm signs (many comparisons, repeated queries) and increase emotional support
Provide calming moments with reassuring communication
Reduce cognitive load through clear hierarchy and progressive disclosure
Adjust tone based on context—more empathetic during complex tasks
Offer both detailed paths and simplified paths with more support
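One possible shape for this, sketched under the assumption that the product can count comparisons and repeated queries within a session; the limits and the two response modes are illustrative, not prescribed by the study.

```python
from dataclasses import dataclass

@dataclass
class BrowsingContext:
    items_compared: int = 0
    repeated_queries: int = 0

def is_overloaded(ctx: BrowsingContext) -> bool:
    # Hypothetical cut-offs for "too much information" in one session.
    return ctx.items_compared > 8 or ctx.repeated_queries >= 2

def plan_response(ctx: BrowsingContext) -> dict:
    """Choose between a detailed path and a simplified, more supportive path."""
    if is_overloaded(ctx):
        return {
            "mode": "simplified",
            "max_options_shown": 3,
            "tone": "warm",             # warmth and empathy matter most under overload
            "offer_detail_link": True,  # the detailed path stays one tap away
        }
    return {
        "mode": "detailed",
        "max_options_shown": 10,
        "tone": "neutral",
        "offer_detail_link": False,
    }

print(plan_response(BrowsingContext(items_compared=12)))
```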
5. Build confidence into every interaction
Deliberately design to enhance self-efficacy.
Action items:
Onboard with progressively challenging tasks that build mastery
Celebrate user achievements, not just AI performance
Attribute success to user skill with AI assistance
Let users refine AI behaviour, emphasising their control
Track confidence indicators, not just task completion
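As a sketch of the last item, here is one way a session could log confidence-related events alongside the usual completion metric. The event names and the crude scoring rule are hypothetical, not taken from the study’s instrument.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical behavioural proxies for self-efficacy.
CONFIDENCE_EVENTS = {
    "retried_without_help",       # user recovered on their own
    "used_advanced_feature",      # user stretched beyond the basics
    "accepted_ai_suggestion",     # user acted on a recommendation
    "abandoned_mid_task",         # negative signal
    "asked_same_question_again",  # negative signal
}

@dataclass
class SessionMetrics:
    task_completed: bool = False
    events: Counter = field(default_factory=Counter)

    def log(self, event: str) -> None:
        if event in CONFIDENCE_EVENTS:
            self.events[event] += 1

    def confidence_score(self) -> int:
        """Crude proxy: positive signals minus negative signals."""
        positive = (self.events["retried_without_help"]
                    + self.events["used_advanced_feature"]
                    + self.events["accepted_ai_suggestion"])
        negative = (self.events["abandoned_mid_task"]
                    + self.events["asked_same_question_again"])
        return positive - negative

m = SessionMetrics(task_completed=True)
m.log("accepted_ai_suggestion")
m.log("asked_same_question_again")
print(m.task_completed, m.confidence_score())
```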
6. Balance transparency with accessibility
Explanations should enhance confidence, not undermine it (Shin, 2025).
Action items:
Layer explanations: simple summaries by default, details on demand (see the sketch after this list)
Use analogies connecting AI processes to familiar concepts
Focus transparency on aspects users can understand and control
Test whether explanations increase both understanding and confidence
Accept that “I trust it helps me” may be sufficient for some users
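A minimal sketch of the layered-explanation idea referenced above: a plain-language summary shown by default, with fuller reasoning available on demand. The example text and structure are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    summary: str   # always shown: one plain-language sentence
    detail: str    # shown only when the user asks for more

    def render(self, expanded: bool = False) -> str:
        if expanded:
            return f"{self.summary}\n\nMore detail: {self.detail}"
        return self.summary

why = LayeredExplanation(
    summary="Recommended because it matches your size and has strong reviews "
            "from similar shoppers.",
    detail="The ranking combined your saved size, your recent searches, and "
           "review scores from buyers with similar purchase histories.",
)

print(why.render())               # default: the simple summary
print(why.render(expanded=True))  # on demand: the fuller reasoning
```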
Conclusion
This research reveals that users adopt AI systems not when they’re impressed by the technology, but when they’re confident in themselves. The pathway runs through user self-efficacy — the belief that “I can successfully work with this system.”
Different human-like features work differently:
Warmth and empathy are emotional shortcuts that become especially powerful when users feel overwhelmed
Competence matters when demonstrated through reliable, understandable performance
Raw intelligence can backfire if it makes AI seem beyond user understanding (the “uncanny valley of mind”)
Information overload amplifies emotional cues but doesn’t affect analytical cues
For UX professionals and teams building AI products, it’s important to design for user confidence. Focus on making users feel capable, supported, and in control. Demonstrate competence through clarity rather than sophistication. The future of AI adoption may depend less on how smart we can make our systems appear, and more on how confident we can make users feel in their ability to work with them.