Exploring Social Loafing in Human-Computer Teams
How overtrust in automation can lead to reduced human effort
As robots and Artificial Intelligence (AI) become more prevalent in the workplace, there is increasing interest in how humans interact and perform when working in teams with non-human partners. Existing theories of human teamwork suggest that introducing robot teammates could both benefit and harm team performance.
Previous research has shown that the presence of others can enhance individual effort and achievement through social facilitation effects (Bond & Titus, 1983). However, a phenomenon known as social loafing suggests that working collectively can also reduce individual motivation and effort compared to working alone (Latané, Williams & Harkins, 1979). This results in a group performing below its potential. Social loafing is a robust effect seen across many experimental studies and task types (Cymek & Manzey, 2022).
Origins of Social Loafing Research & Theories
The origins of social loafing research can be traced back to studies by the French agricultural engineer Max Ringelmann, published in 1913. Ringelmann was interested in optimising group performance amongst agricultural workers. In a rope-pulling study, he found that as more people were added to pull on a rope attached to a pressure gauge, total force increased, but not to the extent expected from the individuals' capacities. Ringelmann identified two factors reducing collective output: coordination losses and motivation losses.

Coordination losses refer to the difficulty of coordinating the efforts of multiple individuals in a group. For example, when pulling a rope together, it is hard to get everyone to pull at exactly the same moment and exert force simultaneously, so the group never realises its full potential. Motivation losses refer to decreases in individual motivation when working in a group: Ringelmann suggested that when working collectively, individuals tend to rely on others to put in effort, so each person contributes less than they would if working alone.
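To make the pattern concrete, here is a minimal sketch of how expected and observed group output diverge. The solo pull force and the per-member efficiency figures are illustrative assumptions chosen to mimic the reported pattern, not Ringelmann's actual measurements.

```python
# Illustrative sketch of the Ringelmann effect: observed group output
# falls short of the naive sum of individual capacities.
# All numbers are hypothetical, chosen only to mimic the pattern.
SOLO_PULL_KG = 60  # assumed force a single person exerts alone
PER_MEMBER_EFFICIENCY = {1: 1.00, 2: 0.93, 3: 0.85, 8: 0.49}  # assumed values

for n, eff in PER_MEMBER_EFFICIENCY.items():
    expected = n * SOLO_PULL_KG   # what n independent pullers "should" produce
    observed = expected * eff     # what the group actually produces
    print(f"{n} puller(s): expected {expected:.0f} kg, "
          f"observed {observed:.0f} kg ({eff:.0%} efficiency per member)")
```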
Subsequent theories have further unpacked the motivation deficits underlying social loafing. Social impact theory proposes that social pressure directed at a group is divided among its members, so each person feels less of it as the group gets bigger. For example, in a 2-person group each member bears 50% of the pressure to perform, but in a 10-person group each member bears only 10%. This diffusion of influence as group size grows means individuals feel less social pressure to exert effort.
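Under this reading, the pressure felt by each member scales as 1/n. The toy calculation below (a sketch, assuming a perfectly even split of pressure) makes the dilution explicit:

```python
# Toy model of pressure diffusion under social impact theory:
# a fixed amount of social pressure is split evenly across n group members.
def felt_pressure(total_pressure: float, n_members: int) -> float:
    """Pressure each member experiences, assuming an even 1/n split."""
    return total_pressure / n_members

for n in (2, 5, 10):
    share = felt_pressure(total_pressure=1.0, n_members=n)
    print(f"group of {n}: each member feels {share:.0%} of the pressure")
```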
Another account, sometimes called effort matching or the "sucker effect", holds that individuals reduce their effort to match how hard they expect others in the group to work. If Person A expects their teammates to put in a poor performance, Person A will lower their own effort to that level, reducing the group's overall motivation and output. (Social compensation, by contrast, describes the opposite reaction: working harder to offset weak teammates.)
Finally, the collective effort model (CEM) integrated these explanations into a single account, asserting that motivation depends on the expectation that individual effort will contribute to group success and on the value placed on that outcome. Supporting the CEM, a meta-analysis found that social loafing declined as task meaningfulness and individual identifiability increased (Karau & Williams, 1993). Factors including gender, culture and task complexity were also found to moderate social loafing, in line with CEM predictions.
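The CEM rests on an expectancy-value logic: motivational force is (roughly) the product of expectancy that effort leads to performance, instrumentality of that performance for a valued outcome, and the outcome's value. A minimal sketch with entirely hypothetical numbers shows how diluted instrumentality in a group depresses motivation:

```python
# Expectancy-value sketch of the collective effort model (CEM).
# All numbers are hypothetical; the point is the multiplicative structure.
def motivational_force(expectancy: float, instrumentality: float, value: float) -> float:
    """Motivation ~ expectancy x instrumentality x value, each scaled to [0, 1]."""
    return expectancy * instrumentality * value

# Working alone: my effort maps directly onto the outcome I receive.
alone = motivational_force(expectancy=0.9, instrumentality=0.9, value=0.8)

# Working collectively: my individual effort is less decisive for the group
# outcome, so perceived instrumentality drops and motivation falls with it.
in_group = motivational_force(expectancy=0.9, instrumentality=0.4, value=0.8)

print(f"alone: {alone:.2f}  |  in a group: {in_group:.2f}")
```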
Social Loafing in Human-Computer Teams
Social loafing is most evident in tasks where individual contributions are pooled into a single group outcome, making it difficult to identify what any one person contributed. Because robot and AI teammates have no intrinsic need to evaluate their partners and exert no social pressure on humans, social loafing could be an even bigger concern in human-robot or human-AI teams than in all-human teams. Initial studies have found hints of social loafing in human-robot teams, but objective performance declines have been elusive (Schmidtler et al., 2015; Onnasch & Panayotidis, 2020).
A new study by Cymek, Truckenbrodt and Onnasch (2023) directly tested for social loafing effects in a simulated industrial quality control task. Participants searched for defects on circuit boards, working either alone or in sequence after a robotic teammate.
The study used both objective and subjective measures of effort and performance. The task required focused visual search and detection of difficult-to-spot defects, providing a strong test of subtle differences in human motivation and diligence. Participants were told that their robotic teammate was extremely reliable, correctly identifying almost 95% of defects before the human began searching each board. This reliability was expected to reduce the human's perceived responsibility and lead to progressively lower effort over time.
Participants appeared to put in similar physical effort whether working alone or with the robot, uncovering comparable image areas at a steady pace. A key finding, however, was that humans working after the robot missed significantly more defects in the critical final trial block.
On the small subset of boards where the robot made errors, humans working alone detected 80% of the defects on average, compared with just 66% for humans working after the robot; put differently, the miss rate rose from 20% to 34%.
Subjective measures told a different story: all participants reported high, steady levels of effort and responsibility throughout the task. The gap between objective and subjective results suggests participants may have carried out the visual search diligently in both conditions while processing what they saw less thoroughly when they could rely on the reliable robot.
What are the Implications?
These results provide initial evidence that social loafing can occur subtly in human-robot teams, even when participants report consistent effort and engagement. The findings suggest humans may rely excessively on dependable robotic teammates in ways that impair human attentiveness and performance over time.
The findings from this human-robot teaming study also have intriguing implications for relationships between humans and AI systems. Just as with the reliable robot, humans may come to overtrust and grow complacent with AI teammates over time. Disembodied AI exerts no social pressure and imposes no accountability, precisely the conditions under which social loafing thrives in human teams.
If humans view AI systems through a social lens, this could lead to biases such as automation bias, overtrust in AI capabilities, and reduced human diligence. Social loafing may be even more likely with AI than with human teammates, as AI provides less social incentive for motivation, though more research is needed.
Becoming aware of social loafing is the first step towards mitigating it. Several further strategies can help:
Establish individual accountability for human team members through performance evaluations. This could involve monitoring and providing feedback on the distinct contributions made by each human, rather than just overall team output.
Periodically validate robot or AI performance to catch missed errors and inaccuracies. This also helps humans maintain accurate mental models of the systems and their limitations (a simple audit sketch follows this list).
Design robots/AI to provide motivation and encouragement to human teammates in order to foster engagement and involvement. They could remind humans of the significance of tasks and boost morale.
Most importantly, we should study the human-computer relationship in each individual case in order to identify the most suitable mitigation strategies.
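As one example of the validation strategy above, the following sketch shows a periodic audit: a human re-checks a random sample of the robot's decisions and raises a flag when observed reliability drifts below the assumed level. Every name, threshold and function here is hypothetical, not drawn from the study.

```python
import random

ASSUMED_RELIABILITY = 0.95  # what the team believes about the robot (hypothetical)
AUDIT_RATE = 0.10           # fraction of robot-inspected items a human re-checks

def audit(items, robot_verdicts, human_recheck):
    """Re-inspect a random sample of the robot's decisions and estimate its accuracy.

    items: the inspected artefacts; robot_verdicts: the robot's pass/fail calls;
    human_recheck: a callable giving a careful human verdict for one item.
    """
    sample = random.sample(range(len(items)), max(1, int(len(items) * AUDIT_RATE)))
    agreements = sum(robot_verdicts[i] == human_recheck(items[i]) for i in sample)
    observed = agreements / len(sample)
    if observed < ASSUMED_RELIABILITY:
        print(f"WARNING: observed reliability {observed:.0%} is below the assumed "
              f"{ASSUMED_RELIABILITY:.0%}; recalibrate trust in the robot.")
    return observed
```

A regular audit like this keeps humans actively engaged with the robot's output rather than passively deferring to it, directly countering the looking-without-seeing pattern observed in the study.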
As these technologies become more commonplace, we should thoughtfully design the collaborative framework, leverage the complementary strengths of humans and machines, and apply strategies from the psychology literature to maximise motivation and performance. Testing different approaches in real-world settings will be important for identifying optimal solutions.