Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust

Front Psychol. 2024 Apr 17;15:1382693. doi: 10.3389/fpsyg.2024.1382693. eCollection 2024.

Abstract

The rapid advancement of artificial intelligence (AI) has affected many aspects of society. Alongside this progress, concerns such as privacy violations, discriminatory bias, and safety risks have surfaced, highlighting the need to develop ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying the factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimensional framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.

Keywords: AI ethics; competence; human-AI trust; human-automation trust; interpersonal trust; trust measurement; trustworthy AI; warmth.

Publication types

  • Review

Grants and funding

The authors declare that financial support was received for the research and publication of this article. This work was supported by research grants from the National Natural Science Foundation of China [grant number 32171074] and the Institute of Psychology, Chinese Academy of Sciences [grant number E1CX0230] awarded to SL.