Almost human: Anthropomorphism increases trust resilience in cognitive agents

J Exp Psychol Appl. 2016 Sep;22(3):331-49. doi: 10.1037/xap0000092. Epub 2016 Aug 8.

Abstract

We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human–automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism—the degree to which an agent exhibits human characteristics—is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice from a computer, avatar, or human agent, and the reliability of that advice gradually deteriorated. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human–agent trust as well as novel automation design.

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adolescent
  • Adult
  • Artificial Intelligence*
  • Automation
  • Cognition*
  • Computers
  • Female
  • Humans
  • Male
  • Trust*
  • User-Computer Interface*
  • Young Adult