Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information

JNCI Cancer Spectr. 2023 Mar 1;7(2):pkad015. doi: 10.1093/jncics/pkad015.

Abstract

Data about the quality of cancer information that chatbots and other artificial intelligence systems provide are limited. Here, we evaluate the accuracy of cancer information provided by ChatGPT against the National Cancer Institute's (NCI's) answers, using the questions from the NCI's "Common Cancer Myths and Misconceptions" web page. The NCI's answers and ChatGPT's answers to each question were blinded and then evaluated for accuracy (accurate: yes vs no). Ratings were made independently for each question and then compared between the blinded NCI and ChatGPT answers. Word count and Flesch-Kincaid readability grade level were also calculated for each individual response. Following expert review, the percentage of overall agreement for accuracy was 100% for NCI answers and 96.9% for ChatGPT outputs for questions 1 through 13 (κ = -0.03, standard error = 0.08). There were few noticeable differences in word count or readability between the NCI and ChatGPT answers. Overall, the results suggest that ChatGPT provides accurate information about common cancer myths and misconceptions.
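The evaluation metrics named in the abstract (percentage of agreement, Cohen's kappa, word count, and Flesch-Kincaid grade level) can be computed with standard libraries. The sketch below is not the authors' analysis code; the binary accuracy ratings and the answer text in it are hypothetical placeholders, and it assumes scikit-learn and the textstat package are available.

```python
# Minimal sketch (hypothetical data): agreement/kappa on binary accuracy ratings,
# plus word count and Flesch-Kincaid grade level for a single response.
from sklearn.metrics import cohen_kappa_score
import textstat

# Hypothetical per-question accuracy ratings (1 = accurate, 0 = not accurate)
# for the blinded NCI and ChatGPT answers to 13 questions.
nci_ratings = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
chatgpt_ratings = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]

# Percentage of overall agreement and Cohen's kappa between the two sets of ratings.
agreement = sum(a == b for a, b in zip(nci_ratings, chatgpt_ratings)) / len(nci_ratings)
kappa = cohen_kappa_score(nci_ratings, chatgpt_ratings)
print(f"overall agreement: {agreement:.1%}, kappa: {kappa:.2f}")

# Readability and length for one illustrative answer (placeholder text).
answer = "No. Eating sugar does not make cancer grow faster; all cells, healthy or cancerous, use glucose for energy."
print("word count:", len(answer.split()))
print("Flesch-Kincaid grade level:", textstat.flesch_kincaid_grade(answer))
```

Because the ratings above are placeholders, the printed agreement and kappa will not match the values reported in the abstract; the sketch only illustrates how such metrics are typically derived.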

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Humans
  • National Cancer Institute (U.S.)
  • Neoplasms* / diagnosis
  • United States / epidemiology