Mind Over Matter: The Placebo Effect in Artificial Intelligence and User Experience
Exploring the Unseen Influence of Expectations on AI Interactions
The placebo effect, traditionally associated with the field of medicine, has surprising parallels in the world of artificial intelligence (AI) and user experience (UX). As AI systems become increasingly integrated into our daily lives, a growing body of research sheds light on how users' expectations can profoundly shape their experiences and interactions with these technologies – a phenomenon known as the AI placebo effect.
Understanding the Placebo Effect
The placebo effect is a fascinating phenomenon in which a person's expectations of a treatment's benefits can produce real, perceived improvements in their condition, despite the treatment having no therapeutic value (Beecher, 1955). Originating in medical research, it demonstrates the power of the mind: belief and expectation alone can shape physiological and psychological responses¹.
In medicine, placebos are often used for symptom relief (e.g. pain relief), and the effect has been extensively validated in both clinical and psychological research since the 1950s. Beyond treating ailments, placebo treatments have been shown to enhance performance and even objective outcomes, although the most consistent gains appear in subjective evaluations of improvement. Placebo effects are so strong that even when people become aware that a treatment is a sham, they can still experience positive effects.
The Placebo Effect in HCI
Research reveals that the anticipation of enhancement can induce placebo effects beyond medicine, in contexts ranging from music to medical devices intended to boost performance. For example, the effect has been shown to enhance creativity, cognitive abilities, and athletic performance.
Researchers have also demonstrated that the placebo effect can manifest in human-computer interaction (HCI). These studies show that users' expectations and beliefs about a technology can significantly influence their experiences, behaviours, and even cognitive processing, independent of the system's actual functionality.
One study by Villa et al. (2023) investigated the placebo effect of "augmentation technologies" on risk-taking behaviour. Risk-taking was chosen as the measure because increased willingness to take risks could indicate that participants believed their cognitive functions were enhanced by the placebo technology. The researchers designed the study around a sham (fake) brain-computer interface that claimed to boost cognitive performance through inaudible sounds. In reality, no augmentation occurred - the interface was non-functional, serving as a placebo condition.
The results showed that participants exhibited a placebo effect through increased risk-taking after being exposed to the sham brain-computer interface. This suggests that the mere belief in cognitive augmentation led to changes in risk-taking behaviour, even though no actual enhancement took place. Additionally, brain imaging (EEG) data revealed differences in how participants processed information, particularly related to losses, under the placebo condition compared to the control. This implies the placebo effect can significantly influence not just behavioural outputs but also the underlying neural processes involved in cognition.
For HCI research, this underscores the potential for technology and user interface designs to leverage or mitigate the placebo effect through careful management of user expectations.
The Placebo Effect in AI
The AI boom of the last few years has prompted researchers to investigate how much a person's expectations about AI can affect the way they interact with it, revealing a strong placebo effect.
A study by Kosch et al. (2023) looked at how expectations about an AI system can impact users' trust and their likelihood of following its advice. Participants were tasked with solving word puzzles, with some being told they would receive assistance from an "adaptive AI interface" that adjusted puzzle difficulty. However, no AI was actually provided - all participants experienced the same unaltered puzzle set and difficulty levels.
The results showed that when users believed they were receiving help from an AI, their expectations regarding their own performance increased, even after the interaction ended. These expectations were also positively correlated with performance: users who expected help from a non-existent AI ended up solving more word puzzles. The authors interpreted this as evidence that system descriptions alone can elicit placebo effects by shaping user expectations.
Another study, by Pataranutaporn et al. (2023), looked at the way users interact with a chatbot built on OpenAI's generative model GPT-3. Participants were told they would interact with a mental health chatbot, and they were given a description of it before the interaction. They were assigned to one of three conditions: they were told the bot was caring, manipulative, or had no motive. After using the chatbot, participants who were told it was caring perceived it as such. They also found it more trustworthy, empathetic, and effective than participants primed to believe it was neutral or manipulative. Interestingly, users' expectations didn't just affect their perceptions of the AI; they also influenced the interaction itself and the conversations participants had with the chatbot. In particular, conversations in the caring condition were mostly positive, whereas conversations in the manipulative condition were more likely to be negative. This suggests user expectations don't just shape how an AI is perceived, but can fundamentally alter the nature of human-AI interactions in a manner consistent with those expectations, even when the AI's outputs are identical. The placebo effect appears to create a kind of confirmation bias during the interaction.
User expectations don't just shape how an AI is perceived, but can fundamentally alter the nature of human-AI interactions in a manner consistent with those expectations
A more recent study by Kloft et al. (2024) provided further evidence that the placebo effect shapes how we perceive and interact with AI. The researchers used a mixed-design lab study to determine how users' expectations, shaped by positive or negative descriptions of AI, influence their performance on tasks where the AI's involvement is a sham, meaning the AI doesn't actually contribute to the task's outcome.
The experiment involved a letter discrimination task, chosen for its ability to model simple decision-making processes. Participants were told they would be working with an AI system designed to either improve or impair their performance by adjusting the interface. In reality, no AI was present in any condition—a fact unknown to the participants.
The study involved 66 participants with varying levels of AI literacy, to minimise bias from preconceived notions about AI. Kloft et al. found that participants' performance expectations and actual task performance were influenced by their belief in the AI's involvement, regardless of whether the AI was described positively or negatively. This indicates a strong placebo effect, in which the mere expectation of AI assistance can enhance user performance. Interestingly, even negative descriptions of the AI (suggesting it would impair performance) did not alter participants' positive biases towards AI effectiveness.
The study also revealed that participants processed information faster and altered their response style when they believed AI was assisting them. This effect was consistent despite variations in AI descriptions, highlighting a pervasive "AI performance bias."
Implications for UX Professionals
For UX professionals, these insights emphasise the importance of managing user expectations in the design and deployment of AI systems. Clear communication about an AI system's capabilities and limitations is crucial for aligning user expectations with reality, potentially enhancing user satisfaction and trust in the technology. For example, it might be safer to portray these systems as less capable. However, as the latest study by Kloft et al. shows, this alone cannot completely counteract the placebo effect.
Being aware of and understanding the placebo effect in HCI can also guide ethical considerations in design, ensuring that user experiences are genuinely beneficial and not merely the result of inflated expectations. We should consider the psychological impact of AI descriptions in our designs and conduct rigorous user testing to understand how expectations may influence interaction quality.
In addition, user studies of AI and other interactive HCI systems should incorporate controls for placebo effects, to ensure that observed outcomes are truly attributable to the technology's functionality and not to user expectations. The sketch below illustrates one way such a control might be analysed.
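As a concrete (and deliberately simplified) illustration, here is a minimal Python sketch of a placebo-controlled comparison, echoing the designs of the studies above: both groups use an identical system, and only the description given to participants differs. The CSV file and column names are hypothetical placeholders, not from any of the cited studies.

```python
# Minimal sketch of a placebo control in an AI user study (hypothetical data).
# Both groups interact with the *same* system; only the description differs:
# one group is told an AI is assisting them, the other is told nothing.

import pandas as pd
from scipy import stats

# One row per participant. The file and column names are placeholders.
df = pd.read_csv("study_results.csv")

told_ai = df.loc[df["condition"] == "told_ai", "task_score"]
control = df.loc[df["condition"] == "control", "task_score"]

# Welch's t-test: does the mere description of AI assistance shift
# performance, given that the underlying system was identical?
t_stat, p_value = stats.ttest_ind(told_ai, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A significant difference here would reflect a placebo effect,
# because nothing about the system itself differed between groups.
```

A simple between-subjects comparison like this is only one option; a within-subjects or mixed design, as in Kloft et al., would call for a paired or mixed-model analysis instead.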
Conclusion
The placebo effect, while traditionally associated with medical research, holds significant implications for HCI and UX, particularly in the context of AI. More research is needed to understand how to better control this effect. A first step is acknowledging and thoughtfully managing user expectations, so that we can create more effective, satisfying, and ethical user experiences.
Note: Thanks for supporting UX Psychology. It means a lot to see people following and sharing my work. My aim is to keep my weekly articles free forever but I’m also planning to release one extra each month as a thank you to paid subscribers — starting this month!
¹ There are more explanations for the placebo effect (e.g., perceived control, classical conditioning), but they're beyond the scope of this article.