Accuracy and consistency of online large language model-based artificial intelligence chat platforms in answering patients' questions about heart failure

Int J Cardiol. 2024 Apr 30:408:132115. doi: 10.1016/j.ijcard.2024.132115. Online ahead of print.

Abstract

Background: Heart failure (HF) is a prevalent condition associated with significant morbidity. Patients may have questions they feel embarrassed to ask, or they may face delays awaiting responses from their healthcare providers, which can affect their health behavior. We aimed to investigate the potential of large language model (LLM)-based artificial intelligence (AI) chat platforms to complement the delivery of patient-centered care.

Methods: Using online patient forums and physician experience, we created 30 questions related to the diagnosis, management, and prognosis of HF. The questions were posed to two LLM-based AI chat platforms (OpenAI's ChatGPT-3.5 and Google's Bard). Each set of answers was evaluated independently by two HF experts, blinded to each other's assessments, for accuracy (adequacy of content) and consistency of content.

Results: ChatGPT provided mostly appropriate answers (27/30, 90%) and showed a high degree of consistency (93%). Bard provided similar content in its answers and was therefore evaluated only for adequacy (23/30, 77%). The two HF experts' grades were concordant for 83% and 67% of the questions for ChatGPT and Bard, respectively.

Conclusion: LLM-based AI chat platforms demonstrate potential for improving HF education and empowering patients; however, these platforms currently suffer from factual errors and difficulty with more contemporary recommendations. Such inaccurate information may have serious and life-threatening implications for patients, which should be considered and addressed in future research.

Keywords: Artificial intelligence; Chatbots; Heart failure; Patient centered care.