Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence

J Biomed Inform. 2023 Dec:148:104550. doi: 10.1016/j.jbi.2023.104550. Epub 2023 Nov 20.

Abstract

Background: Artificial intelligence and machine learning (AI/ML) technologies, such as generative and ambient AI solutions, are proliferating in real-world healthcare settings. Because clinician trust affects the adoption and impact of these systems, organizations need a validated method to assess the factors underlying trust and acceptance of AI for clinical workflows.

Objective: Our study set out to develop and assess a novel clinician-centered model to measure and explain trust in and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt) and post-adoption Trusting Stance (i.e., general stance towards any AI system). We validated the new model at an urban comprehensive cancer center and produced an easily implemented survey tool for measuring clinician trust and adoption of AI.

Methods: This survey-based, cross-sectional, psychometric study included a model development phase and a validation phase. Measurement was done with five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) who used an AI-based communication application. The validation sample included N = 73 clinicians who used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects.

Results: In the fully moderated causal model, clinician trust explained a large amount of variance in acceptance of a specific AI application (56%) and in post-adoption trusting stance towards AI in general (36%). Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes an average of 5 min to complete.

Conclusions: We found that clinician acceptance of AI is determined by their degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI to identify and intercept barriers to clinician adoption in real-world settings.

Keywords: AI/ML adoption; Clinician trust; Digital healthcare; Human computer interaction; Technology acceptance; Trustworthy artificial intelligence/machine learning (AI/ML).

Publication types

  • Validation Study
  • Research Support, N.I.H., Extramural

MeSH terms

  • Artificial Intelligence*
  • Attitude of Health Personnel*
  • Cross-Sectional Studies
  • Humans
  • Psychometrics
  • Reproducibility of Results
  • Surveys and Questionnaires
  • Technology
  • Trust*