Evaluating Chat Generative Pre-trained Transformer Responses to Common Pediatric In-toeing Questions

J Pediatr Orthop. 2024 Apr 30. doi: 10.1097/BPO.0000000000002695. Online ahead of print.

Abstract

Objective: Chat generative pre-trained transformer (ChatGPT) has garnered attention in health care for its potential to reshape patient interactions. As patients increasingly rely on artificial intelligence platforms, concerns about information accuracy arise. In-toeing, a common lower extremity variation, often leads to pediatric orthopaedic referrals despite observation being the primary treatment. Our study aims to assess ChatGPT's responses to pediatric in-toeing questions, contributing to discussions on health care innovation and technology in patient education.

Methods: We compiled a list of 34 common in-toeing questions from the "Frequently Asked Questions" sections of 9 health care-affiliated websites and identified the 25 most frequently encountered. On January 17, 2024, we queried ChatGPT 3.5 with these questions in separate sessions and recorded the responses. The same 25 questions were posed again on January 21, 2024, to assess the reproducibility of its responses. Two pediatric orthopaedic surgeons evaluated each response on a scale from "excellent (no clarification)" to "unsatisfactory (substantial clarification)." When the evaluators' grades were within one level of each other, the average rating was used; in discordant cases, the senior author provided a decisive rating.
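As a minimal sketch of the grading consolidation rule described above (not the authors' code), the logic could be expressed as follows, assuming a hypothetical 4-point numeric encoding of the ordinal scale and an illustrative function name `consolidate`:

```python
# Illustrative sketch only: consolidating two raters' grades on the study's
# 4-point ordinal scale, assuming the numeric encoding below (an assumption,
# not taken from the paper).
SCALE = {
    "excellent (no clarification)": 4,
    "satisfactory (minimal clarification)": 3,
    "satisfactory (moderate clarification)": 2,
    "unsatisfactory (substantial clarification)": 1,
}

def consolidate(grade_a: str, grade_b: str, senior_grade: str) -> float:
    """Average the two grades if they are within one level of each other;
    otherwise defer to the senior author's decisive rating."""
    a, b = SCALE[grade_a], SCALE[grade_b]
    if abs(a - b) <= 1:
        return (a + b) / 2
    return float(SCALE[senior_grade])

# Example: raters within one level of each other -> averaged score of 3.5
print(consolidate("excellent (no clarification)",
                  "satisfactory (minimal clarification)",
                  "excellent (no clarification)"))
```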

Results: We found that 46% of ChatGPT responses were "excellent" and 44% were "satisfactory (minimal clarification)." In addition, 8% were "satisfactory (moderate clarification)" and 2% were "unsatisfactory." The questions themselves had appropriate readability, with an average Flesch-Kincaid Grade Level of 4.9 (±2.1); however, ChatGPT's responses were written at a collegiate level, averaging 12.7 (±1.4). No significant differences in ratings were observed between question topics. Furthermore, ChatGPT exhibited moderate consistency across repeated queries, evidenced by a Spearman rho coefficient of 0.55 (P = 0.005). The chatbot appropriately described in-toeing as normal or spontaneously resolving in 62% of responses and recommended evaluation by a health care provider in 100% of responses.
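To illustrate the two quantitative measures reported above, a brief sketch follows. It is not the study's analysis code: the Flesch-Kincaid Grade Level uses the standard published formula with a rough vowel-group syllable heuristic (dedicated tools such as textstat may give slightly different values), the Spearman correlation uses scipy.stats.spearmanr, and the example ratings are hypothetical placeholders rather than study data.

```python
# Illustrative sketch of the reported readability and reproducibility metrics.
import re
from scipy.stats import spearmanr

def fk_grade_level(text: str) -> float:
    """Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
    Syllables are approximated by counting vowel groups, so this is only an estimate."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Reproducibility: rank correlation between ratings from the two query dates.
# The lists below are hypothetical; the study reported rho = 0.55, P = 0.005
# across the 25 repeated questions.
round1 = [4, 3, 4, 2, 3, 4, 3, 4]
round2 = [4, 4, 3, 2, 3, 4, 2, 4]
rho, p_value = spearmanr(round1, round2)

print(f"FK grade: {fk_grade_level('In-toeing usually resolves on its own.'):.1f}")
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```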

Conclusion: The chatbot presented a serviceable, though not perfect, representation of the diagnosis and management of pediatric in-toeing while demonstrating a moderate level of reproducibility in its responses. ChatGPT's utility could be enhanced by improving readability and consistency and incorporating evidence-based guidelines.

Level of evidence: Level IV-diagnostic.