Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset

J Magn Reson Imaging. 2021 Aug;54(2):452-459. doi: 10.1002/jmri.27585. Epub 2021 Feb 26.

Abstract

Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology, and it can be used either alone or combined with other parameters such as prostate-specific antigen.

Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation.

Study type: Retrospective.

Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset.

Field strength/sequence: 3 T; turbo spin echo (TSE) T2-weighted.

Assessment: Four operators performed manual segmentation of the whole gland, the central zone + anterior stroma + transition zone (TZ), and the peripheral zone (PZ). U-Net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and the TZ separately, while automated PZ masks were obtained by subtracting the TZ mask from the whole-gland mask (sketched below).
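
As an illustration of that subtraction step, here is a minimal sketch assuming binary NumPy masks of identical shape; the function and variable names are illustrative and not taken from the study:

    import numpy as np

    def derive_pz_mask(whole_gland: np.ndarray, tz: np.ndarray) -> np.ndarray:
        # Peripheral-zone mask = voxels inside the whole gland but outside the TZ.
        whole_gland = whole_gland.astype(bool)
        tz = tz.astype(bool)
        return (whole_gland & ~tz).astype(np.uint8)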

Statistical tests: Networks were evaluated on the test set using several accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using analysis of variance (ANOVA) and post hoc tests. Number of parameters, disk size, and training and inference times were used to characterize network computational complexity and to further compare the models. P < 0.05 was considered statistically significant.
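
For reference, the DSC between a predicted mask P and a ground-truth mask G is 2|P ∩ G| / (|P| + |G|). A minimal NumPy sketch under that standard definition (names are illustrative):

    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
        # DSC = 2 * |P ∩ G| / (|P| + |G|), computed on binary masks.
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))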

Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-Net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference times were lowest for ENet.

Data conclusion: Deep learning networks can accurately segment the prostate using T2-weighted images.

Evidence level: 4. Technical efficacy: Stage 2.

Keywords: deep learning; machine learning; magnetic resonance imaging; prostate; prostatic neoplasms.

MeSH terms

  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging
  • Male
  • Prostatic Neoplasms* / diagnostic imaging
  • Retrospective Studies