Objectives: To evaluate the performance of ChatGPT in a French medical school entrance examination.
Methods: A cross-sectional study using a consecutive sample of text-based multiple-choice practice questions for the Parcours d'Accès Spécifique Santé. ChatGPT answered the questions in French. We compared the performance of ChatGPT on the obstetrics and gynecology (OBGYN) questions with its performance on the whole test.
Results: Overall, 885 questions were evaluated. The mean test score was 34.0% (306 of a maximum of 900 points). ChatGPT scored 33.0% (292 correct answers out of 885 questions). The performance of ChatGPT was lower in biostatistics (13.3% ± 19.7%) than in anatomy (34.2% ± 17.9%; P = 0.037) and than in histology and embryology (40.0% ± 18.5%; P = 0.004). The OBGYN section comprised 290 questions. Neither the mean test scores nor the performance of ChatGPT differed between the OBGYN section and the whole entrance test (P = 0.76 and P = 0.10, respectively).
Conclusions: ChatGPT answered one-third of the questions correctly on this French medical school entrance practice test. Its performance on the OBGYN questions was similar to its performance on the test overall.
Keywords: ChatGPT; French; OBGYN; large language models; performance; test.
© 2023 The Authors. International Journal of Gynecology & Obstetrics published by John Wiley & Sons Ltd on behalf of International Federation of Gynecology and Obstetrics.