The unmet promise of trustworthy AI in healthcare: why we fail at clinical translation

Front Digit Health. 2024 Apr 18;6:1279629. doi: 10.3389/fdgth.2024.1279629. eCollection 2024.

Abstract

Artificial intelligence (AI) has the potential to revolutionize healthcare, for example via decision support systems, computer vision approaches, or AI-based prevention tools. Initial results from AI applications in healthcare show promise, but they are rarely translated into clinical practice successfully and ethically. This occurs despite an abundance of "Trustworthy AI" guidelines. How can we explain the translational gaps of AI in healthcare? This paper offers a fresh perspective on the problem, arguing that the failing translation of healthcare AI arises largely from the lack of an operational definition of "trust" and "trustworthiness". This gap leads to (a) unintentional misuse stemming from confusion about what trust(worthiness) is and (b) the risk of intentional abuse by industry stakeholders engaging in ethics washing. By pointing out these issues, we aim to highlight the obstacles that keep Trustworthy medical AI from reaching practice and from fulfilling its as yet unmet promises.

Keywords: AI; ethics; healthcare; machine learning; medicine; translation; trust; trustworthiness.

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article.