CBCT-to-CT Synthesis for Cervical Cancer Adaptive Radiotherapy via U-Net-Based Model Hierarchically Trained with Hybrid Dataset

Cancers (Basel). 2023 Nov 20;15(22):5479. doi: 10.3390/cancers15225479.

Abstract

Purpose: To develop a deep learning framework, trained on a hybrid dataset, that enhances the quality of cone-beam CT (CBCT) images and yields accurate Hounsfield unit (HU) values.

Materials and methods: A total of 228 cervical cancer patients treated on different LINACs were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to assess the quality of the synthetic CT images generated by our model.
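The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of an encoder-decoder with residual blocks and skip connections of the kind described, not the authors' actual network: the depth, channel widths, normalization choices, and the additive skips are all assumptions.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut (residual learning)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)


class UNetLikeSynth(nn.Module):
    """Illustrative encoder-decoder mapping a CBCT slice to a synthetic CT slice.

    Skip connections are additive here to keep the sketch short; a standard
    U-Net concatenates encoder features instead. H and W must be divisible by 4.
    """
    def __init__(self, base: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, base, 3, padding=1), ResidualBlock(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), ResidualBlock(base * 2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), ResidualBlock(base * 4))
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ResidualBlock(base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResidualBlock(base)
        self.out = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(e1)                     # 1/2 resolution
        e3 = self.enc3(e2)                     # 1/4 resolution (bottleneck)
        d2 = self.dec2(self.up2(e3) + e2)      # upsample + skip connection
        d1 = self.dec1(self.up1(d2) + e1)      # upsample + skip connection
        return self.out(d1) + x                # global residual: CT = CBCT + learned correction


# Smoke test with a dummy 256x256 slice
model = UNetLikeSynth()
sct = model(torch.randn(1, 1, 256, 256))
print(sct.shape)  # torch.Size([1, 1, 256, 256])
```

The global residual on the output reflects a common design choice for CBCT-to-CT synthesis, where the network learns a correction to the input intensities rather than the full image; whether the authors use this exact formulation is not stated in the abstract.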

Results: The MAE between the synthetic CT images generated by our model and the planning CT was 10.93 HU, compared with 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by a convolutional neural network with residual blocks, our model performed better both qualitatively and quantitatively.
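For concreteness, here is a small sketch of how the three reported metrics could be computed for one synthetic-CT/planning-CT slice pair, using NumPy and scikit-image. The HU data_range and the placeholder data are assumptions; the abstract does not specify the normalization or window used.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_slice(sct_hu: np.ndarray, pct_hu: np.ndarray, data_range: float = 2000.0):
    """Compare a synthetic CT slice with the paired planning CT slice, both in HU.

    data_range is an assumed HU span (e.g. -1000 to 1000); treat it as a
    placeholder, since the paper's abstract does not state the value used.
    """
    mae = float(np.mean(np.abs(sct_hu - pct_hu)))                           # MAE in HU
    psnr = peak_signal_noise_ratio(pct_hu, sct_hu, data_range=data_range)   # in dB
    ssim = structural_similarity(pct_hu, sct_hu, data_range=data_range)
    return mae, psnr, ssim


# Tiny demo with random HU-like slices (placeholder data only)
rng = np.random.default_rng(0)
pct = rng.uniform(-1000, 1000, size=(256, 256))
sct = pct + rng.normal(0, 20, size=pct.shape)
print(evaluate_slice(sct, pct))
```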

Conclusions: Our model can synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserve tissue edges well, which is important for downstream tasks in adaptive radiotherapy.

Keywords: adaptive radiotherapy; artifact removal; cervical cancer; hierarchical training; image enhancement; synthetic CT.