Human intelligence versus Chat-GPT: who performs better in correctly classifying patients in triage?

Am J Emerg Med. 2024 May:79:44-47. doi: 10.1016/j.ajem.2024.02.008. Epub 2024 Feb 7.

Abstract

Introduction: Chat-GPT is rapidly emerging as a promising and potentially revolutionary tool in medicine. One of its possible applications is the stratification of patients according to the severity of clinical conditions and prognosis during the triage evaluation in the emergency department (ED).

Methods: Using a randomly selected sample of 30 vignettes recreated from real clinical cases, we compared the concordance in risk stratification of ED patients between healthcare personnel and Chat-GPT. Concordance was assessed with Cohen's kappa, and performance was evaluated with the area under the receiver operating characteristic curve (AUROC). Among the outcomes, we considered mortality within 72 h, the need for hospitalization, and the presence of a severe or time-dependent condition.
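The two metrics named above can be illustrated with a minimal, self-contained sketch. The triage codes and outcomes below are made-up toy values, not the study's vignettes; the functions implement unweighted Cohen's kappa (observed vs. chance agreement) and AUROC via the rank-sum (Mann-Whitney) formulation with tie handling.

```python
# Illustrative sketch only -- toy data, not the study's actual cases or code.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters' category assignments."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement expected from the two raters' marginal frequencies.
    expected = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (observed - expected) / (1 - expected)

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation, with ties averaged."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # Group tied scores and assign them the average (1-based) rank.
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical triage codes (1 = most urgent ... 4 = least urgent) for 10 vignettes.
nurse   = [1, 2, 2, 3, 1, 4, 3, 2, 4, 1]
chatgpt = [1, 3, 2, 3, 2, 4, 2, 2, 4, 3]
print(round(cohens_kappa(nurse, chatgpt), 3))  # -> 0.467

# Hypothetical binary outcome (1 = hospitalized), scored by inverted triage code
# so that a higher score means higher assigned urgency.
outcome = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
urgency = [5 - c for c in nurse]
print(round(auroc(outcome, urgency), 3))  # -> 0.96
```

In the study's setup, `nurse` and `chatgpt` would be the triage codes assigned to each vignette, and the AUROC would be computed per outcome (72-h mortality, hospitalization, severe/time-dependent condition) against each rater's urgency ordering.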

Results: The concordance in triage code assignment between triage nurses and Chat-GPT was 0.278 (unweighted Cohen's kappa; 95% confidence interval: 0.231-0.388). For all outcomes, the AUROC values were higher for the triage nurses. The most relevant difference was found in 72-h mortality, where triage nurses showed an AUROC of 0.910 (0.757-1.000) compared to only 0.669 (0.153-1.000) for Chat-GPT.

Conclusions: The current level of Chat-GPT reliability is insufficient to make it a valid substitute for the expertise of triage nurses in prioritizing ED patients. Further developments are required to enhance the safety and effectiveness of AI for risk stratification of ED patients.

Keywords: Advanced nurse practice; Artificial intelligence; ChatGPT; Manchester triage system; Nursing; Triage.

MeSH terms

  • Emergency Service, Hospital
  • Hospitalization*
  • Humans
  • Patients
  • Reproducibility of Results
  • Triage*