Performance of AI chatbots on controversial topics in oral medicine, pathology, and radiology

Oral Surg Oral Med Oral Pathol Oral Radiol. 2024 May;137(5):508-514. doi: 10.1016/j.oooo.2024.01.015. Epub 2024 Feb 6.

Abstract

Objectives: In this study, we assessed the responses of 6 artificial intelligence (AI) chatbots (Bing, GPT-3.5, GPT-4, Google Bard, Claude, and Sage) to controversial and difficult questions in oral pathology, oral medicine, and oral radiology.

Study design: The chatbots' answers were rated by board-certified specialists using a modified version of the global quality score on a 5-point Likert scale. The quality and validity of the citations each chatbot provided were also assessed.

Results: In oral pathology and oral medicine, Claude had the highest mean score (4.341 ± 0.582) and Bing the lowest (3.447 ± 0.566). In oral radiology, GPT-4 had the highest mean score (3.621 ± 1.009) and Bing the lowest (2.379 ± 0.978). Across all disciplines, GPT-4 achieved the highest overall mean score (4.066 ± 0.825). Of the 349 citations generated by the chatbots, 82 (23.50%) were fabricated.

Conclusions: GPT-4 was the best-performing chatbot in providing high-quality information on controversial topics across the dental disciplines examined. Although most chatbots performed well, given the relatively high number of fabricated citations, it is suggested that developers of medical AI chatbots incorporate scientific citation authenticators to validate generated citations.

MeSH terms

  • Artificial Intelligence*
  • Humans
  • Oral Medicine*
  • Pathology, Oral
  • Radiology