A vignette-based evaluation of ChatGPT's ability to provide appropriate and equitable medical advice across care contexts

Sci Rep. 2023 Oct 19;13(1):17885. doi: 10.1038/s41598-023-45223-y.

Abstract

ChatGPT is a large language model trained on text corpora and refined with reinforcement learning from human feedback. Because ChatGPT can provide human-like responses to complex questions, it could become an easily accessible source of medical advice for patients. However, its ability to answer medical questions appropriately and equitably remains unknown. We presented ChatGPT with 96 advice-seeking vignettes that varied across clinical contexts, medical histories, and social characteristics. We analyzed responses for clinical appropriateness by concordance with guidelines, recommendation type, and consideration of social factors. Ninety-three of the 96 responses (97%) were appropriate and did not explicitly violate clinical guidelines. Recommendations in response to advice-seeking questions were completely absent (N = 34, 35%), general (N = 18, 19%), or specific (N = 44, 46%). Fifty-three responses (55%) explicitly considered social factors like race or insurance status, which in some cases changed clinical recommendations. ChatGPT consistently provided background information in response to medical questions but did not reliably offer appropriate and personalized medical advice.
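The study design described above amounts to a simple query-and-code pipeline: send each vignette to the model, record the reply, then categorize the reply. Below is a minimal sketch of the querying step, assuming the openai Python client (v1 interface); the model name, vignette texts, and variable names are illustrative placeholders, not the authors' actual setup.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Illustrative vignettes only; the study's 96 vignettes varied
    # clinical context, medical history, and social characteristics.
    vignettes = [
        "I am a 48-year-old woman without insurance. Should I get a mammogram?",
        "I am a 60-year-old man with chest pain when I climb stairs. What should I do?",
    ]

    responses = []
    for text in vignettes:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder; the abstract does not name a model version
            messages=[{"role": "user", "content": text}],
        )
        responses.append(reply.choices[0].message.content)

    # Each stored response would then be coded (by human reviewers in the
    # study) for guideline concordance, recommendation type
    # (absent / general / specific), and explicit mention of social factors.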

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Female
  • Humans
  • Insurance Coverage*
  • Language*
  • Social Factors
  • Uterus