Deep learning-based segmentation in prostate radiation therapy using Monte Carlo simulated cone-beam computed tomography

Med Phys. 2022 Nov;49(11):6930-6944. doi: 10.1002/mp.15946. Epub 2022 Aug 31.

Abstract

Purpose: Segmenting organs in cone-beam CT (CBCT) images would allow the radiotherapy to be adapted to the organ deformations that may occur between treatment fractions. However, this is a difficult task because of the relative lack of contrast in CBCT images, leading to high inter-observer variability. Deformable image registration (DIR) and deep learning-based automatic segmentation approaches have shown promising results for this task in recent years. However, they are either sensitive to large organ deformations or require training a convolutional neural network (CNN) on a database of delineated CBCT images, which is difficult to build without improving image quality. In this work, we propose an alternative approach: training a CNN (using the deep learning-based segmentation tool nnU-Net) on a database of artificial CBCT images simulated from planning CT scans, for which organ contours are easier to obtain.

Methods: Pseudo-CBCT (pCBCT) images were simulated from readily available segmented planning CT images using the GATE Monte Carlo simulation toolkit. The CT reference delineations were copied onto the pCBCT, resulting in a database of segmented images used to train the neural network. The studied structures were the bladder, rectum, and prostate. We trained multiple nnU-Net models using different training data: (1) segmented real CBCT, (2) pCBCT, (3) segmented real CT, tested on pseudo-CT (pCT) generated from CBCT with a cycleGAN, and (4) a combination of (2) and (3). The evaluation was performed on different datasets of segmented CBCT or pCT by comparing predicted segmentations with the reference ones using the Dice similarity coefficient (DSC) and the Hausdorff distance. A qualitative evaluation was also performed to compare DIR-based and nnU-Net-based segmentations.
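For reference, below is a minimal sketch of how the two evaluation metrics can be computed from binary segmentation masks, assuming NumPy and SciPy are available. The function names, the voxel-spacing parameter, and the use of full foreground voxel sets (rather than extracted surfaces) are illustrative choices for this sketch, not implementation details taken from the paper.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice_score(pred, ref):
        """Dice similarity coefficient between two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        if denom == 0:
            return 1.0  # both masks empty: define DSC as 1
        return 2.0 * np.logical_and(pred, ref).sum() / denom

    def hausdorff_distance(pred, ref, spacing=(1.0, 1.0, 1.0)):
        """Symmetric Hausdorff distance between two non-empty binary masks.

        Voxel indices are scaled by the voxel spacing so the result is in
        physical units (e.g., mm) rather than voxels.
        """
        p = np.argwhere(pred.astype(bool)) * np.asarray(spacing)
        r = np.argwhere(ref.astype(bool)) * np.asarray(spacing)
        return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])

Note that reporting conventions vary: some studies use surface-based or 95th-percentile Hausdorff variants, whereas the sketch above computes the classic maximum distance over all foreground voxels.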

Results: Training with pCBCT was found to yield results comparable to training with real CBCT images. When evaluated on CBCT obtained from the same hospital as the CT images used in the simulation of the pCBCT, the model trained with pCBCT scored mean DSCs of 0.92 ± 0.05, 0.87 ± 0.02, and 0.85 ± 0.04 and mean Hausdorff distances of 4.67 ± 3.01, 3.91 ± 0.98, and 5.00 ± 1.32 for the bladder, rectum, and prostate contours, respectively, while the model trained with real CBCT scored mean DSCs of 0.91 ± 0.06, 0.83 ± 0.07, and 0.81 ± 0.05 and mean Hausdorff distances of 5.62 ± 3.24, 6.43 ± 5.11, and 6.19 ± 1.14 for the bladder, rectum, and prostate contours, respectively. The pCBCT-trained model was also found to outperform the models using pCT or a combination of both, except for the prostate contour when tested on a dataset from a different hospital. Moreover, the resulting segmentations demonstrated clinical acceptability: 78% of bladder segmentations, 98% of rectum segmentations, and 93% of prostate segmentations required minor or no corrections, and for 76% of the patients, all structures required minor or no corrections.

Conclusion: We proposed using simulated CBCT images to train an nnU-Net segmentation model, avoiding the need to gather complex and time-consuming reference delineations on CBCT images.

Keywords: CBCT; Monte Carlo simulation; cancer; deep learning; prostate; segmentation.

MeSH terms

  • Cone-Beam Computed Tomography
  • Deep Learning*
  • Humans
  • Male
  • Prostate / diagnostic imaging