Leveraging generative AI for clinical evidence synthesis needs to ensure trustworthiness

J Biomed Inform. 2024 May;153:104640. doi: 10.1016/j.jbi.2024.104640. Epub 2024 Apr 10.

Abstract

Evidence-based medicine promises to improve the quality of healthcare by empowering medical decisions and practices with the best available evidence. The rapid growth of medical evidence, which can be obtained from diverse sources, poses a challenge for collecting, appraising, and synthesizing this evidence. Recent advances in generative AI, exemplified by large language models, hold promise for facilitating these arduous tasks. However, developing accountable, fair, and inclusive models remains a complicated undertaking. In this perspective, we discuss the trustworthiness of generative AI in the context of automated summarization of medical evidence.

Keywords: Evidence-based medicine; Large language models; Medical evidence summarization; Trustworthy generative AI.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Evidence-Based Medicine*
  • Humans
  • Natural Language Processing
  • Trust