Diagnosing uterine cervical cancer on a single T2-weighted image: Comparison between deep learning and radiologists

Eur J Radiol. 2021 Feb;135:109471. doi: 10.1016/j.ejrad.2020.109471. Epub 2020 Dec 5.

Abstract

Purpose: To compare deep learning with radiologists when diagnosing uterine cervical cancer on a single T2-weighted image.

Methods: This study included 418 patients (age range, 21-91 years; mean, 50.2 years) who underwent magnetic resonance imaging (MRI) between June 2013 and May 2020: 177 patients with pathologically confirmed cervical cancer and 241 non-cancer patients. Sagittal T2-weighted images were used for analysis. A deep convolutional neural network (DCNN) model based on the Xception architecture was trained for 50 epochs on 488 images from 117 cancer patients and 509 images from 181 non-cancer patients. It was tested on 120 images, one from each of 60 cancer patients and 60 non-cancer patients. Three experienced radiologists, blinded to the diagnoses, independently interpreted the same 120 images. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were compared between the DCNN model and the radiologists.
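The authors' code is not published; the following is a minimal sketch of how a binary Xception-based classifier of this kind could be assembled in Keras. The input size, the replication of grayscale T2-weighted slices to three channels, the pooling head, the optimizer, and the loss are all assumptions, not details from the abstract.

```python
import tensorflow as tf


def build_cervix_classifier(input_shape=(299, 299, 3)):
    """Binary cancer/non-cancer classifier on an Xception backbone.

    Assumptions (not stated in the abstract): 299x299 inputs with the
    grayscale slice replicated to 3 channels, global average pooling,
    and a single sigmoid output unit.
    """
    backbone = tf.keras.applications.Xception(
        include_top=False, weights=None, input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(backbone.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_cervix_classifier()
# The abstract specifies 50 training epochs:
# model.fit(train_images, train_labels, epochs=50)
```

In practice such models are usually initialized with `weights="imagenet"` and fine-tuned; `weights=None` is used here only to keep the sketch self-contained.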

Results: The DCNN model achieved a sensitivity of 0.883, a specificity of 0.933, and an accuracy of 0.908; the radiologists' corresponding ranges were 0.783-0.867, 0.917-0.950, and 0.867-0.892. The diagnostic performance of the DCNN model was equal to or better than that of the radiologists (AUC = 0.932; p for accuracy = 0.272-0.62).
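As a sanity check, the reported DCNN rates are internally consistent with the 120-image test set (60 cancer, 60 non-cancer). The true-positive and true-negative counts below (53 and 56) are inferred from the reported rates, not stated in the abstract:

```python
# Confusion-matrix counts inferred from the reported DCNN results,
# assuming 60 cancer and 60 non-cancer test patients (one image each).
tp, fn = 53, 7   # 53/60 -> sensitivity
tn, fp = 56, 4   # 56/60 -> specificity

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(round(sensitivity, 3))  # 0.883
print(round(specificity, 3))  # 0.933
print(round(accuracy, 3))     # 0.908
```

All three values match the abstract, so the reported sensitivity, specificity, and accuracy can be reproduced from a single plausible confusion matrix.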

Conclusion: Deep learning provided diagnostic performance equivalent to that of experienced radiologists when diagnosing cervical cancer on a single T2-weighted image.

Keywords: Artificial intelligence; CNN; Cervical carcinoma; Convolutional neural network; Magnetic resonance imaging; T2WI.

MeSH terms

  • Adult
  • Aged
  • Aged, 80 and over
  • Deep Learning*
  • Female
  • Humans
  • Middle Aged
  • Neural Networks, Computer
  • Radiologists
  • Retrospective Studies
  • Uterine Cervical Neoplasms* / diagnostic imaging
  • Young Adult