Assessing the performance of ChatGPT in bioethics: a large language model's moral compass in medicine

J Med Ethics. 2024 Jan 23;50(2):97-101. doi: 10.1136/jme-2023-109366.

Abstract

Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education, yet it has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple-choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethics questions were identified from the widely used question banks UWorld and AMBOSS. Accuracy, bioethical category, difficulty level, specialty, error type and character count were analysed. ChatGPT achieved an overall accuracy of 59.6%, performing better on topics surrounding death and the patient-physician relationship and poorly on questions pertaining to informed consent. Of all the specialties, it performed best in paediatrics; however, certain specialties and bioethical categories were under-represented. Its errors were predominantly content errors and application errors. There was no significant association between character count and accuracy. Nevertheless, this investigation contributes to the ongoing dialogue on the role of artificial intelligence (AI) in healthcare and medical education, advocating for further research to fully understand the capabilities and constraints of AI systems in the nuanced field of medical bioethics.

Keywords: Decision Making; Education; Ethics, Medical.

MeSH terms

  • Artificial Intelligence
  • Child
  • Education, Medical*
  • Humans
  • Language
  • Medicine*
  • Morals