Trust in Artificial Intelligence: Meta-Analytic Findings

Hum Factors. 2023 Mar;65(2):337-359. doi: 10.1177/00187208211013988. Epub 2021 May 28.

Abstract

Objective: The present meta-analysis sought to determine significant factors that predict trust in artificial intelligence (AI). Such factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction.

Background: Many factors influence trust in robots, automation, and technology in general, and several meta-analyses have attempted to identify the antecedents of trust in these areas. However, no targeted meta-analysis has examined the antecedents of trust in AI.

Method: Data from 65 articles were used to examine the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Lastly, four common uses for AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were examined as further potential moderating factors.
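As a concrete illustration of the kind of effect-size pooling such a meta-analysis performs, the sketch below implements a DerSimonian-Laird random-effects model in Python. This is a generic, minimal sketch under the assumption of standard inverse-variance pooling; the article does not specify its exact procedure, and the function name random_effects_pool and all numeric inputs are hypothetical.

    import math

    def random_effects_pool(effects, variances):
        """Pool per-study effect sizes with a DerSimonian-Laird
        random-effects model (a common meta-analytic approach;
        not necessarily the one used in this article)."""
        k = len(effects)
        # Fixed-effect (inverse-variance) weights and pooled estimate
        w = [1.0 / v for v in variances]
        pooled_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        # Cochran's Q heterogeneity statistic
        q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
        # DerSimonian-Laird between-study variance (tau^2)
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)
        # Random-effects weights incorporate tau^2
        w_re = [1.0 / (v + tau2) for v in variances]
        pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        return pooled_re, se, tau2

    # Hypothetical Fisher-z transformed correlations and their variances
    effects = [0.35, 0.42, 0.28, 0.51]
    variances = [0.010, 0.015, 0.008, 0.020]
    est, se, tau2 = random_effects_pool(effects, variances)
    print(f"pooled effect = {est:.3f}, SE = {se:.3f}, tau^2 = {tau2:.4f}")

Moderator analyses of the kind described above (e.g., comparing chatbots vs. robots) would repeat such pooling within each subgroup and test whether the pooled estimates differ.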

Results: Results showed that all of the examined categories were significant predictors of trust in AI, as were many individual antecedents, such as AI reliability and anthropomorphism, among others.

Conclusion: Overall, this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance. Additionally, we highlight areas where empirical research is currently lacking.

Application: Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as their applications require.

Keywords: artificial intelligence; human–automation interaction; meta-analysis; trust.

Publication types

  • Meta-Analysis

MeSH terms

  • Artificial Intelligence*
  • Automation
  • Humans
  • Reproducibility of Results
  • Trust*