Pivotal Clinical Study to Evaluate the Efficacy and Safety of Assistive Artificial Intelligence-Based Software for Cervical Cancer Diagnosis

J Clin Med. 2023 Jun 13;12(12):4024. doi: 10.3390/jcm12124024.

Abstract

Colposcopy is the gold standard diagnostic tool for identifying cervical lesions. However, the accuracy of colposcopy depends on the proficiency of the colposcopist. Machine learning algorithms using an artificial intelligence (AI) system can quickly process large amounts of data and have been successfully applied in several clinical situations. This study evaluated the feasibility of an AI system as an assistive tool for diagnosing high-grade cervical intraepithelial neoplasia lesions compared with human interpretation of cervical images. This two-center, crossover, double-blind, randomized controlled trial included 886 randomly selected images. Four colposcopists (two proficient and two inexperienced) independently evaluated the cervical images, once with and once without the aid of the Cerviray AI® system (AIDOT, Seoul, Republic of Korea). The AI aid demonstrated an improved area under the localization receiver-operating characteristic curve compared with the colposcopic impressions of the colposcopists (difference 0.12; 95% confidence interval, 0.10-0.14; p < 0.001). Sensitivity and specificity also improved with the AI system (89.18% vs. 71.33%, p < 0.001; 96.68% vs. 92.16%, p < 0.001, respectively). Additionally, the classification accuracy rate improved with the aid of AI (86.40% vs. 75.45%; p < 0.001). Overall, the AI system could be used as an assistive diagnostic tool for both proficient and inexperienced colposcopists in cervical cancer screening to estimate the colposcopic impression and the location of pathologic lesions. Further utilization of this system could help inexperienced colposcopists determine where to perform a biopsy to diagnose high-grade lesions.

Keywords: artificial intelligence; cervical cancer screening; colposcopy; deep learning; machine learning.
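
The following is an illustrative sketch, not part of the published study, of how the kinds of reader-performance metrics reported above (sensitivity, specificity, accuracy, and area under an ROC curve) are commonly computed in Python with scikit-learn. The synthetic labels, reader calls, and AI scores below are assumptions for demonstration only; note also that the study used localization ROC analysis, whereas this sketch uses a plain ROC AUC as a simpler stand-in.

```python
# Illustrative sketch only: synthetic labels/scores, not data from the study.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ground truth: 1 = high-grade lesion, 0 = normal/low-grade.
y_true = rng.integers(0, 2, size=200)

# Hypothetical binary reader impressions (imperfect) and AI probability scores.
y_reader = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)
y_ai_score = np.clip(y_true * 0.7 + rng.normal(0.2, 0.25, 200), 0.0, 1.0)

tn, fp, fn, tp = confusion_matrix(y_true, y_reader).ravel()
sensitivity = tp / (tp + fn)              # true-positive rate
specificity = tn / (tn + fp)              # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_ai_score)   # area under the ROC curve

print(f"sensitivity={sensitivity:.2%} specificity={specificity:.2%} "
      f"accuracy={accuracy:.2%} AUC={auc:.3f}")
```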

Grants and funding

This research was supported by a CHA University grant (2021-08-001).