Does terminology affect how we perceive and evaluate intelligent systems?
AI, algorithms, automated systems, bots, sophisticated statistical models… what’s in a name?
Intelligent systems are defined as “technologically advanced machines that perceive and respond to the world around them”. They can take many forms, from automated vacuums (e.g., Roomba) to facial recognition software to Netflix’s personalized suggestions. When we discuss intelligent systems, we can use a variety of terms to refer to them. Langer and colleagues recently reviewed a number of sources, including newspaper articles, policy-making documents, and academic research, and identified the following terms being used to refer to the same systems: intelligent systems, algorithms, robotic systems, artificial intelligence, AI technologies, AI systems, robots, automated means, computer programs, machine learning, and sophisticated statistical models.
Even though all of the aforementioned terms refer to a similar idea — a system that interacts with humans — research suggests that differences in terminology can influence how people perceive a system (e.g., its perceived complexity), how they evaluate it (e.g., its trustworthiness), and how they behave towards it. For example, people have different mental models when thinking of “AI” or “algorithms”, yet both terms are often used to describe the same system.
Langer and colleagues conducted two experiments to test whether terminology matters and whether different terminology can have distinct effects in communication about intelligent systems. In their first study, they explored how the use of different terms affects people’s perceptions of the properties of the respective systems. In the second study, they examined the effect of different terminology on participants’ evaluations of the fairness and trustworthiness of intelligent systems.
The first study found that the basic properties people associate with the different terms can strongly differ. In particular:
The terms computers and robots were perceived as more tangible than artificial intelligence. In contrast, decision support systems, machine learning, sophisticated statistical models, algorithms, and technical systems were perceived as comparably less tangible.
The term artificial intelligence was perceived to refer to a system with higher complexity compared to decision support systems, automated systems, algorithms, computers, and technical systems.
Computers, robots, and computer programs were perceived as more controllable than artificial intelligence.
AI was more likely to be associated with humanlike characteristics than the other terms included in the study.
Participants associated relatively high machine competence with artificial intelligence, computers, and computer programs, whereas they associated notably lower competence with decision support systems and sophisticated statistical models.
Terminological differences, however, did not affect evaluations of how well intelligent systems could perform different tasks compared to humans. Participants’ general attitudes towards technology also shaped their perceptions: people with a higher affinity for technology perceived intelligent systems as more tangible, less complex, more controllable, more familiar, and more competent, and were less likely to attribute humanlike characteristics to them.
The second study showed that simply changing the term used to describe a system (e.g., AI instead of algorithm) can lead to differences in ratings of perceived fairness and trust. In particular, the terms algorithm and sophisticated statistical model led to, overall, better fairness evaluations than the term artificial intelligence. Additionally, the terms algorithm, robot, and sophisticated statistical model led to, overall, higher trust than the term artificial intelligence. The effect of terminology on participants’ evaluations depended on the task for which a system was used (e.g., work assignment, work evaluation).
The research discussed here by Langer et al. did not examine whether different perceptions and evaluations of intelligent systems also lead to different behaviour when people interact with them. For example, are people more likely to use and rely on a system described as a sophisticated statistical model versus an artificial intelligence? Previous research, however, has shown that ratings of fairness and trust can predict actual system use.
Why is this important?
According to Langer and colleagues, “terminology matters when describing intelligent systems to participants in research studies because it can affect the robustness and replicability of research results, and terminology matters because it may shape perceptions and evaluations of intelligent systems in communication about such systems (e.g., in public discourse and policy-making)”. It is important to be aware that the choice of terms to describe intelligent systems can have unintended consequences (e.g., affecting the adoption rates of a particular system), but also that terminology can be used strategically (e.g., we can refer to a system as artificial intelligence to make it sound complex and novel). When we communicate with someone about intelligent systems, describing the same system with different terminology can affect listeners’ perceptions and evaluations of that system.
When you design a product, you can use terminology intentionally to affect user attitudes towards it
For example, if we are creating a product that uses intelligent systems in recruitment, the terminology we choose is likely to affect the way the product is evaluated by prospective users. Using the term artificial intelligence might lead to a less favorable evaluation of using intelligent systems to evaluate people and make decisions about people’s careers compared to the term algorithm. As a result, the choice for or against a term can be a strategic one.
Terminology can be used intentionally to engage people in discussions or to shape their attitudes towards a product. Furthermore, terminology can serve as a selling argument for companies that use intelligent systems. For example, certain companies claim to use artificial intelligence in their products but actually don’t. This could be an attempt to impress potential customers, given the perceived complexity and the strong potential that people associate with artificial intelligence. Clearly, terminology can also be used strategically to produce desired effects (e.g., engagement, interest).