The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle
Introduction
Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome the design barriers to creating such technology, important psychological barriers emerge for the people who will use it. Perhaps most important: will people be willing to trust competent technology to replace a human mind, such as a teacher's mind when grading essays, a doctor's mind when diagnosing cancer, or their own mind when driving a car?
Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute distinctively human characteristics to nonhumans, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1997). Furthermore, studies examining people's lay theories of humanness show that people define humanness in terms of emotions that implicate higher-order mental processes such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics: namely, a humanlike mind, capable of thinking and feeling.
Trust is a multifaceted concept that can refer to the belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component: trust in another's competence, akin to confidence (Siegrist et al., 2003; Twyman et al., 2008). Just as a patient would trust a thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on a mindful cab driver to navigate rush-hour traffic more than a mindless one, this conceptualization predicts that people will trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that drives itself), for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Likewise, a "warbot" intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator's instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of that function (Epley et al., 2006; Pierce et al., 2013).
This prediction builds on the common association between people's perceptions of others' mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions performed with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions performed mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore make the agent seem better able to control its own actions, and hence better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism and measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and assessed measures indirectly related to trust. However, such studies have not measured whether these superficial manipulations actually increase the attribution of essential humanlike qualities to the agent (the attribution we predict is critical for trust in technology; Hancock et al., 2011), and therefore cannot explain factors found ad hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted differences in willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology.
We conducted our experiment in a domain of practical relevance: people's willingness to trust an autonomous vehicle. Autonomous vehicles—cars that control their own steering and speed—are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely. We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle's ability to drive effectively.
Because anthropomorphism increases trust in an agent's ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for the agent's involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable accident during the driving simulation in which participants were struck by an oncoming car, an accident clearly caused by the other driver. This design maintained experimental control over participants' experience, because everyone in the autonomous vehicle conditions got into the same accident. Indeed, when two people are potentially responsible for an outcome, the agent seen as more competent tends to be credited for a success, whereas the agent seen as less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1982). Because we predicted that anthropomorphism would increase trust in the vehicle's competence, we also predicted that it would reduce blame for an accident clearly caused by another vehicle.
Section snippets
Method
One hundred participants (52 female, Mage = 26.39) completed this experiment using a National Advanced Driving Simulator. Once in the simulator, the experimenter attached physiological equipment to participants and randomly assigned them to condition: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an “autonomous
General discussion
Technological advances blur the line between human and nonhuman, and this experiment suggests that blurring this line even further could increase users' willingness to trust technology in place of humans. Amongst those who drove an autonomous vehicle, those who drove a vehicle that was named, gendered, and voiced rated their vehicle as having more humanlike mental capacities than those who drove a vehicle with the same autonomous features but without anthropomorphic cues. In turn, those who
Acknowledgments
This research was funded by the University of Chicago's Booth School of Business and a grant from the General Motors Company. We thank Julia Hur for assistance with data coding.
References (37)
Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition (2008)
Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology (2010)
The folk concept of intentionality. Journal of Experimental Social Psychology (1997)
Children's behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology (2009)
Driver safety and information from afar: An experimental driving simulator study of wireless vs. in-car information services. International Journal of Human Computer Studies (2008)
Culpable control and the psychology of blame. Psychological Bulletin (2000)
Evaluational reactions to accented English speech. Journal of Abnormal and Social Psychology (1962)
Effects of students' performance on teachers' and observers' attributions of causality. Journal of Educational Psychology (1970)
From agents to objects: Sexist attitudes and neural responses to sexualized targets. Journal of Cognitive Neuroscience (2011)
Brainstorms: Philosophical essays on mind and psychology (1978)
Accents of guilt? Effects of regional accent, race, and crime type on attributions of guilt. Journal of Language and Social Psychology
When perspective taking increases taking: Reactive egoism in social interaction. Journal of Personality and Social Psychology
On seeing human: A three-factor theory of anthropomorphism. Psychological Review
Dimensions of mind perception. Science
Maternal defense: Breast feeding increases aggression by reducing stress. Psychological Science
A meta-analysis of factors affecting trust in human–robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society
Dehumanization: An integrative review. Personality and Social Psychology Review