From technical to understandable: Artificial Intelligence Large Language Models improve the readability of knee radiology reports

Knee Surg Sports Traumatol Arthrosc. 2024 May;32(5):1077-1086. doi: 10.1002/ksa.12133. Epub 2024 Mar 15.

Abstract

Purpose: The purpose of this study was to evaluate the effectiveness of an Artificial Intelligence-Large Language Model (AI-LLM) at improving the readability of knee radiology reports.

Methods: Reports of 100 knee X-rays, 100 knee computed tomography (CT) scans and 100 knee magnetic resonance imaging (MRI) scans were retrieved. The following prompt command was inserted into the AI-LLM: 'Explain this radiology report to a patient in layman's terms in the second person: [Report Text]'. The Flesch-Kincaid reading level (FKRL) score, Flesch reading ease (FRE) score and report length were calculated for the original radiology report and the AI-LLM-generated report. Any 'hallucination' or inaccurate text appearing in the AI-LLM-generated report was documented.
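As a minimal illustrative sketch (not the authors' code), the workflow described above could be reproduced by wrapping each report in the study's prompt and scoring the original and simplified text with standard readability formulas, for example via the textstat Python package; the simplify_report function below is a hypothetical placeholder for whichever AI-LLM interface is used.

    # Sketch of the readability workflow: score a report before and after
    # AI-LLM simplification. Assumes the `textstat` package is installed.
    import textstat

    PROMPT = ("Explain this radiology report to a patient in layman's terms "
              "in the second person: {report}")

    def readability(text: str) -> dict:
        """Return the two readability metrics used in the study."""
        return {
            "FKRL": textstat.flesch_kincaid_grade(text),  # US grade level; lower = easier
            "FRE": textstat.flesch_reading_ease(text),    # 0-100 scale; higher = easier
        }

    def simplify_report(report_text: str) -> str:
        """Hypothetical placeholder: send PROMPT.format(report=report_text)
        to an AI-LLM and return its lay-language rewrite."""
        raise NotImplementedError

    if __name__ == "__main__":
        original = ("There is moderate tricompartmental osteoarthritis "
                    "with joint space narrowing.")
        print(readability(original))        # expected: higher FKRL, lower FRE
        # simplified = simplify_report(original)
        # print(readability(simplified))    # expected: lower FKRL, higher FRE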

Results: Statistically significant improvements in mean FKRL scores in the AI-LLM-generated X-ray report (from 12.7 ± 1.0 to 7.2 ± 0.6), CT report (from 13.4 ± 1.0 to 7.5 ± 0.5) and MRI report (from 13.5 ± 0.9 to 7.5 ± 0.6) were observed. Statistically significant improvements in mean FRE scores in the AI-LLM-generated X-ray report (from 39.5 ± 7.5 to 76.8 ± 5.1), CT report (from 27.3 ± 5.9 to 73.1 ± 5.6) and MRI report (from 26.8 ± 6.4 to 73.4 ± 5.0) were observed. Superior FKRL scores and FRE scores were observed in the AI-LLM-generated X-ray report compared to the AI-LLM-generated CT report and MRI report, p < 0.001. The hallucination rates in the AI-LLM-generated X-ray report, CT report and MRI report were 2%, 5% and 5%, respectively.

Conclusions: This study highlights the promising use of AI-LLMs as an innovative, patient-centred strategy to improve the readability of knee radiology reports. The clinical relevance of this study is that an AI-LLM-generated knee radiology report may enhance patients' understanding of their imaging reports, potentially reducing the responder burden placed on the ordering physicians. However, due to the 'hallucinations' produced by the AI-LLM-generated report, the ordering physician must always engage in a collaborative discussion with the patient regarding both reports and the corresponding images.

Level of evidence: Level IV.

Keywords: artificial intelligence; large language models; radiology reports.

MeSH terms

  • Artificial Intelligence*
  • Comprehension*
  • Humans
  • Knee Joint / diagnostic imaging
  • Magnetic Resonance Imaging*
  • Tomography, X-Ray Computed*
