Artificial intelligence chatbot performance in triage of ophthalmic conditions

Can J Ophthalmol. 2023 Aug 9:S0008-4182(23)00234-X. doi: 10.1016/j.jcjo.2023.07.016. Online ahead of print.

Abstract

Background: Timely, affordable access to human expertise for the triage of ophthalmic conditions is inconsistent. With recent advancements in publicly available artificial intelligence (AI) chatbots, the lay public may turn to these tools for triage of ophthalmic complaints. Validation studies are necessary to evaluate the performance of AI chatbots as triage tools and to inform the public regarding their safety.

Objective: To evaluate the triage performance of AI chatbots for ophthalmic conditions.

Design: Cross-sectional study.

Setting: Single centre.

Participants: Ophthalmology trainees, OpenAI ChatGPT (GPT-4), Bing Chat, and WebMD Symptom Checker.

Methods: Forty-four clinical vignettes representing common ophthalmic complaints were developed, and a standardized pathway of prompts was presented to each tool in March 2023. Primary outcomes were the proportion of responses listing the correct diagnosis among the top 3 suggested diagnoses and the proportion assigning the correct triage urgency. Ancillary outcomes included the presence of grossly inaccurate statements, mean reading grade level, mean response word count, proportion of responses with attribution, and the most common sources cited.
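As a concrete illustration (not part of the published methods), the two primary outcome proportions could be computed from vignette-level judgments along the following lines; the data schema and field names here are hypothetical, and in the study itself correctness was presumably adjudicated by clinicians rather than by string matching.

    from dataclasses import dataclass

    @dataclass
    class VignetteResult:
        """Graded output of one tool on one clinical vignette (hypothetical schema)."""
        top3_diagnoses: list[str]   # tool's top 3 suggested diagnoses
        correct_diagnosis: str      # reference diagnosis for the vignette
        triage_given: str           # urgency assigned by the tool, e.g., "emergent"
        triage_expected: str        # reference urgency for the vignette

    def primary_outcomes(results: list[VignetteResult]) -> tuple[float, float]:
        """Return (top-3 diagnostic accuracy, triage-urgency accuracy) as proportions."""
        n = len(results)
        diag_hits = sum(r.correct_diagnosis in r.top3_diagnoses for r in results)
        triage_hits = sum(r.triage_given == r.triage_expected for r in results)
        return diag_hits / n, triage_hits / n

    # For example, 44 vignettes with 41 correct top-3 diagnoses and 43 correct
    # urgency assignments yield roughly (0.93, 0.98), matching the percentages
    # reported for ChatGPT in the Results.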

Results: Ophthalmology trainees, ChatGPT, Bing Chat, and the WebMD Symptom Checker listed the appropriate diagnosis among their top 3 suggestions in 42 (95%), 41 (93%), 34 (77%), and 8 (33%) cases, respectively. Triage urgency was appropriate in 38 (86%), 43 (98%), and 37 (84%) cases for ophthalmology trainees, ChatGPT, and Bing Chat, respectively.

Conclusions: ChatGPT using the GPT-4 model offered high diagnostic and triage accuracy, comparable with that of ophthalmology trainees, and produced no grossly inaccurate statements. Bing Chat had lower accuracy and a tendency to overestimate triage urgency.