Accuracy of ChatGPT-3.5 and -4 in providing scientific references in otolaryngology-head and neck surgery

Eur Arch Otorhinolaryngol. 2024 Apr;281(4):2159-2165. doi: 10.1007/s00405-023-08441-8. Epub 2024 Jan 11.

Abstract

Introduction: Chatbot generative pre-trained transformer (ChatGPT) is a new artificial intelligence-powered chatbot language model that can help otolaryngologists in practice and research. We investigated the accuracy of ChatGPT-3.5 and -4 in referencing manuscripts published in otolaryngology.

Methods: ChatGPT-3.5 and ChatGPT-4 were queried to provide references for the top 30 most cited papers in otolaryngology from the past 40 years, including clinical guidelines and key practice-changing studies. Responses were regenerated three times to assess the accuracy and stability of ChatGPT. ChatGPT-3.5 and ChatGPT-4 were compared for reference accuracy and potential mistakes.

Results: The accuracy of ChatGPT-3.5 and ChatGPT-4.0 ranged from 47% to 60% and from 73% to 87%, respectively (p < 0.005). ChatGPT-3.5 provided 19 inaccurate references and invented 2 references across the regenerated questions. ChatGPT-4.0 provided 13 inaccurate references and proposed only one invented reference. Across regenerated answers, response stability was mild for ChatGPT-3.5 (k = 0.238) and moderate for ChatGPT-4.0 (k = 0.408).

Conclusions: ChatGPT-4.0 demonstrated higher accuracy than the free-access version (3.5). False references were detected in both the 3.5 and 4.0 versions. Practitioners should be cautious when relying on ChatGPT to retrieve key references while writing a report.

Keywords: Artificial intelligence; ChatGPT; Chatbot; Head neck surgery; Otolaryngology; Reference.

MeSH terms

  • Artificial Intelligence*
  • Humans
  • Language
  • Otolaryngologists
  • Otolaryngology*
  • Software