Trust in automated vehicles: constructs, psychological processes, and assessment

Front Psychol. 2023 Nov 23:14:1279271. doi: 10.3389/fpsyg.2023.1279271. eCollection 2023.

Abstract

There is a growing body of research on trust in driving automation systems. In this paper, we seek to clarify how trust is conceptualized, calibrated, and measured, taking into account issues related to specific levels of driving automation. We find that: (1) experience plays a vital role in trust calibration; (2) experience should be measured not just in terms of distance traveled, but in terms of the range of situations encountered; (3) system malfunctions, and recovery from such malfunctions, are a fundamental part of this experience. We summarize our findings in a framework describing the dynamics of trust calibration. We observe that methods used to quantify trust often lack objectivity, reliability, and validity, and we propose a set of recommendations for researchers seeking to select suitable trust measures for their studies. In conclusion, we argue that the safe deployment of current and future automated vehicles depends on drivers developing appropriate levels of trust. Given the potentially severe consequences of miscalibrated trust, it is essential that drivers incorporate the possibility of new and unexpected driving situations into their mental models of system capabilities. It is vitally important that we develop methods that contribute to this goal.

Keywords: SAE levels; automated driving; automated vehicles; human factors; self-driving; trust; trust calibration; trust in automation.

Grants and funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.