The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle

https://doi.org/10.1016/j.jesp.2014.01.005

Highlights

  • Anthropomorphism of a car predicts trust in that car.

  • Trust is reflected in behavioral, physiological, and self-report measures.

  • Anthropomorphism also affects attributions of responsibility/punishment.

  • These findings shed light on human interaction with autonomous vehicles.

Abstract

Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance: autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features—name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended function when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans.

Introduction

Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome the design barriers to creating such technology, important psychological barriers emerge for the people asked to use it. Perhaps most important: will people be willing to trust competent technology to replace a human mind, such as a teacher's mind when grading essays, a doctor's mind when diagnosing cancer, or their own mind when driving a car?

Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute distinctively human characteristics to nonhumans, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1997). Furthermore, studies examining people's lay theories of humanness show that people define humanness in terms of emotions that implicate higher order mental processes such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics to the agent (namely, a humanlike mind capable of thinking and feeling).

Trust is a multifaceted concept that can refer to the belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component: trust in another's competence (akin to confidence) (Siegrist et al., 2003; Twyman et al., 2008). Just as a patient would trust a thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on a mindful cab driver to navigate through rush hour traffic more than a mindless one, this conceptualization of anthropomorphism predicts that people will trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that drives itself), for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Likewise, a “warbot” intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator's instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of that function (Epley et al., 2006; Pierce et al., 2013).

This prediction builds on the common association between people's perceptions of others' mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions performed with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions performed mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore make the agent seem better able to control its own actions and, in turn, better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism and measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and assessed measures indirectly related to trust. However, such studies have not measured whether these superficial manipulations actually increase the attribution of essential humanlike qualities to the agent (the attribution we predict is critical for trust in technology; Hancock et al., 2011), and therefore cannot explain factors found ad hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology.
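To make the mediational logic concrete, the sketch below shows one common way an indirect effect of this kind (condition to perceived anthropomorphism to trust) can be estimated with ordinary least squares and a percentile bootstrap. It is a minimal illustration only: the variable names, coding, effect sizes, and data are simulated assumptions, not the study's analysis or results.

```python
# Minimal sketch of a bootstrapped indirect-effect (mediation) test.
# All data below are simulated placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical variables: condition coded 0/1/2 (Normal, Agentic,
# Anthropomorphic) and treated as linear for simplicity; a mediator
# (rated anthropomorphism); an outcome (trust in the vehicle).
condition = rng.integers(0, 3, n)
anthro = 0.8 * condition + rng.normal(0, 1, n)                 # assumed path a
trust = 0.6 * anthro + 0.1 * condition + rng.normal(0, 1, n)   # path b + direct

def ols_slope(x, y):
    """Slope from a simple least-squares regression of y on x."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(cond, med, out):
    """a*b indirect effect; b is the mediator's slope controlling for condition."""
    a = ols_slope(cond, med)
    X = np.column_stack([np.ones(len(cond)), cond, med])
    beta, *_ = np.linalg.lstsq(X, out, rcond=None)
    return a * beta[2]

# Percentile-bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(condition[idx], anthro[idx], trust[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> mediation
```

A confidence interval for a*b that excludes zero is the standard bootstrap criterion for concluding that the manipulation affects trust through perceived anthropomorphism rather than directly alone.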

We conducted our experiment in a domain of practical relevance: people's willingness to trust an autonomous vehicle. Autonomous vehicles—cars that control their own steering and speed—are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely. We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle's ability to drive effectively.

Because anthropomorphism should increase trust in an agent's ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for the agent's involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable accident during the driving simulation in which participants were struck by an oncoming car. Staging the accident this way maintained experimental control over participants' experience: everyone in the autonomous vehicle conditions got into the same accident, one clearly caused by the other driver. When two agents are potentially responsible for an outcome, the agent seen as more competent tends to be credited for a success, whereas the agent seen as less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1982). Because we predicted that anthropomorphism would increase trust in the vehicle's competence, we also predicted that it would reduce blame for an accident clearly caused by another vehicle.

Section snippets

Method

One hundred participants (52 female; mean age = 26.39 years) completed this experiment using a National Advanced Driving Simulator. Once participants were seated in the simulator, the experimenter attached physiological equipment and randomly assigned them to one of three conditions: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an “autonomous…
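To illustrate the analytic step this three-condition, between-subjects design implies, here is a minimal sketch comparing a self-report trust measure across conditions with a one-way ANOVA. The group sizes, means, and scores are simulated assumptions for illustration, not the study's data.

```python
# Minimal sketch: one-way ANOVA on a hypothetical trust rating across
# the three conditions. Scores below are simulated, not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical trust ratings per condition (higher = more trust);
# group sizes assume roughly equal assignment of the 100 participants.
normal = rng.normal(4.0, 1.0, 33)
agentic = rng.normal(4.6, 1.0, 33)
anthropomorphic = rng.normal(5.2, 1.0, 34)

f_stat, p_value = f_oneway(normal, agentic, anthropomorphic)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant omnibus F would then typically be followed by pairwise contrasts to confirm the predicted ordering (Normal < Agentic < Anthropomorphic) on trust.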

General discussion

Technological advances blur the line between human and nonhuman, and this experiment suggests that blurring the line even further could increase users' willingness to trust technology in place of humans. Among those who drove an autonomous vehicle, participants whose vehicle was named, gendered, and voiced rated it as having more humanlike mental capacities than those who drove a vehicle with the same autonomous features but without anthropomorphic cues. In turn, those who…

Acknowledgments

This research was funded by the University of Chicago's Booth School of Business and a grant from the General Motors Company. We thank Julia Hur for assistance with data coding.

References

  • Dixon, J.A., et al. (2002). Accents of guilt? Effects of regional accent, race, and crime type on attributions of guilt. Journal of Language and Social Psychology.

  • Epley, N., et al. (2006). When perspective taking increases taking: Reactive egoism in social interaction. Journal of Personality and Social Psychology.

  • Epley, N., et al. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review.

  • Giles, H., et al. (1975).

  • Gray, H.M., et al. (2007). Dimensions of mind perception. Science.

  • Hahn-Holbrook, J., et al. (2011). Maternal defense: Breast feeding increases aggression by reducing stress. Psychological Science.

  • Hancock, P.A., et al. (2011). A meta-analysis of factors affecting trust in human–robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society.

  • Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review.