Performance of an Artificial Intelligence-Based Chatbot (ChatGPT) Answering the European Certification in Implant Dentistry Exam

Int J Prosthodont. 2024 Apr 22;37(2):221-224. doi: 10.11607/ijp.8852.

Abstract

Purpose: To compare the performance of licensed dentists and two software versions (3.5 legacy and 4.0) of an artificial intelligence (AI)-based chatbot (ChatGPT) answering the exam for the 2022 Certification in Implant Dentistry of the European Association for Osseointegration (EAO).

Materials and methods: The 50-question, multiple-choice exam of the EAO for the 2022 Certification in Implant Dentistry was obtained. Three groups were created based on the individual or program answering the exam: licensed dentists (D group) and two software versions of an artificial intelligence (AI)-based chatbot (ChatGPT): the 3.5 legacy version (ChatGPT-3.5 group) and the 4.0 version (ChatGPT-4.0 group). The EAO provided the results of the 2022 examinees (D group). For the ChatGPT groups, the 50 multiple-choice questions were introduced into both ChatGPT versions, and the answers were recorded. A Pearson correlation matrix was used to analyze the linear relationships among the subgroups. Inter- and intraoperator reliability was calculated using the Cronbach alpha coefficient. One-way ANOVA and Tukey post-hoc tests were used to examine the data (α = .05).
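For readers who wish to reproduce this style of analysis, the sketch below illustrates the statistical pipeline named in the methods (one-way ANOVA, Tukey post-hoc comparisons, Cronbach's alpha, and a Pearson correlation matrix) in Python with SciPy, statsmodels, and NumPy. All score and answer data here are hypothetical placeholders, not the study's data, and group sizes are assumed for illustration only.

```python
# Minimal sketch of the abstract's statistical pipeline; all data are
# hypothetical placeholders, NOT the study's actual results.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical exam scores (out of 50) per group; sizes are assumptions.
dentists = rng.normal(35, 4, 30)   # D group: 2022 examinees
gpt35 = rng.normal(33, 3, 10)      # ChatGPT-3.5 legacy, repeated runs
gpt40 = rng.normal(42, 2, 10)      # ChatGPT-4.0, repeated runs

# One-way ANOVA across the three groups (alpha = .05).
f_stat, p_val = stats.f_oneway(dentists, gpt35, gpt40)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey post-hoc pairwise comparisons between groups.
scores = np.concatenate([dentists, gpt35, gpt40])
groups = (["D"] * len(dentists) + ["GPT-3.5"] * len(gpt35)
          + ["GPT-4.0"] * len(gpt40))
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical binary answer matrix (repeated runs x 50 questions)
# used to estimate inter-/intraoperator reliability.
answers = rng.integers(0, 2, size=(10, 50)).astype(float)
print(f"Cronbach's alpha: {cronbach_alpha(answers):.2f}")

# Pearson correlation matrix among the groups' per-question scores
# (placeholder values: 3 groups x 50 questions).
per_question = rng.random((3, 50))
print(np.corrcoef(per_question))
```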

Results: ChatGPT was able to pass the exam for the 2022 Certification in Implant Dentistry of the EAO. Additionally, the software version of ChatGPT impacted the score obtained: the 4.0 version not only passed the exam but also obtained a significantly higher score than the 3.5 version and the licensed dentists completing the same exam.

Conclusions: The AI-based chatbot tested, in its 4.0 version, not only passed the exam but also performed better than licensed dentists.

Publication types

  • Comparative Study

MeSH terms

  • Artificial Intelligence*
  • Certification*
  • Dental Implantation / education
  • Educational Measurement* / methods
  • Europe
  • Humans
  • Software