Striking the Right Balance: Anthropomorphism in AI Assistants
What are the implications for UX?
The concept of anthropomorphism, or attributing human traits and behaviours to non-human entities, has become increasingly relevant as artificial intelligence (AI) proliferates.
People tend to anthropomorphise AI and think of it as having human-like social capabilities. Past research has demonstrated that incorporating anthropomorphic design elements can improve user experience and satisfaction with AI systems. For example, studies have shown that humanlike voices and natural conversational abilities in chatbots lead to more positive user perceptions and emotional connections (Edwards et al., 2019; Nass & Moon, 2000). Other work has revealed preferences for embodied robots and virtual agents that exhibit humanlike facial expressions, emotions, and social cues (Liu & Sundar, 2018; Yam et al., 2021). This is similar to how people form imagined relationships with TV characters, known as "parasocial relationships." Just as viewers feel connected to TV characters, users can feel bonded with an anthropomorphic AI and want to keep using it. The more human-like the AI seems, the more likely people are to feel this social attraction and bond with it.
However, anthropomorphism is a complex, multidimensional phenomenon. Not all humanlike cues elicit positive reactions, and some may even have unintended consequences. The uncanny valley hypothesis, for example, proposes that near-human resemblance can evoke unease or even revulsion once subtle imperfections become noticeable (Mori et al., 2012). Furthermore, factors like privacy risks may come into play with humanised AI. As a result, the effects of anthropomorphic design require further investigation.
Xie et al. (2023) recently conducted two experiments examining how different anthropomorphic cues influence user satisfaction with smart home assistants like Amazon Echo and Google Home. In particular, they focused on voice-based smart home assistants and conceptualised four key dimensions of anthropomorphism:
Visual cues: Humanlike appearance, face, eyes
Identity cues: Humanlike name or identity
Emotional cues: Expression of humour, empathy
Auditory cues: Humanlike voice
The authors hypothesised that these cues would influence satisfaction through two pathways with opposing effects. Anthropomorphic cues were expected to increase intimacy, improving satisfaction. At the same time they were also expected to amplify privacy concerns, reducing satisfaction.
In the first study, conducted with over 400 participants, Xie et al. created scenarios where users interacted with smart home assistants exhibiting different combinations of anthropomorphic cues. They found that emotional cues (humour) and humanlike auditory cues (e.g., a humanlike voice) directly increased user satisfaction. However, visual and identity cues did not directly affect satisfaction. Interestingly, visual humanlikeness did indirectly boost satisfaction by making interactions feel more intimate.
Another notable finding was that emotional and auditory cues influenced privacy perceptions in opposite ways. The addition of humour actually heightened privacy concerns, while humanlike voices alleviated them. Overall, the positive effect of increased intimacy outweighed the negative effect of heightened privacy concerns.
The second study replicated these results with over 800 participants and also showed that emotional and auditory cues were even more impactful when people used the assistants for entertainment (e.g., playing music) than for practical tasks, such as finding information. Conversational cues make interactions more fun and engaging when amusement is the goal rather than efficient task completion.
Implications for UX Professionals
The research shows that anthropomorphic cues, that is, making an AI seem human-like, have complicated, nuanced effects on users. There is no simple formula where "more human-like = better": certain anthropomorphic cues can improve the experience of interacting with AI, while others may have no effect or even backfire.
Specifically, emotional cues like humour tend to increase user satisfaction and strengthen connections to the AI, despite slightly heightening privacy concerns. Humour makes the interactions more fun and engaging. This effect is strongest when the AI is used for entertainment rather than utilitarian purposes.
Humanlike auditory cues also boost satisfaction while mitigating privacy risks. Natural-sounding voices lead to higher perceived anthropomorphism. However, perfectly mimicking human speech could risk dipping into the "uncanny valley" if not done carefully and subtly.
In contrast, visual humanlikeness does not appear to directly impact satisfaction. Overly humanlike visuals may in fact contribute to unease due to the uncanny valley effect.
Identity cues like human names seem insignificant for voice-only interfaces. Human names do not affect user satisfaction or feelings of intimacy based on current findings.
When designing smart home assistants for entertainment purposes, emotional and auditory anthropomorphism can be emphasised to heighten enjoyment. Anthropomorphic cues, however, are less important or even counterproductive for utilitarian assistants focused on tasks like information lookup. The principles likely extend beyond smart home assistants to other human-AI interfaces, but more research is needed to further validate this.
These insights can form a framework for deciding when to incorporate anthropomorphism thoughtfully, based on the AI's purpose and interaction modalities. As always, user research is needed to test the effectiveness of our designs, but existing findings can give us a head start.
If you’re interested in AI and UX, check out the latest episode of the podcast I’m co-presenting, UX Guide to the Galaxy!