C2-GAN: Content-consistent generative adversarial networks for unsupervised domain adaptation in medical image segmentation

Med Phys. 2022 Oct;49(10):6491-6504. doi: 10.1002/mp.15944. Epub 2022 Aug 27.

Abstract

Purpose: In clinical practice, medical image analysis plays a key role in disease diagnosis. One important step is accurate organ or tissue segmentation, which assists medical professionals in making correct diagnoses. Despite tremendous progress in deep learning-based medical image segmentation, these approaches often fail to generalize to test datasets due to distribution discrepancies across domains. Recent advances that align domain gaps using bi-directional GANs (e.g., CycleGAN) have shown promising results, but the strict cycle-consistency constraint hampers these methods from yielding better performance. The purpose of this study is to propose a novel bi-directional GAN-based segmentation model with a relaxed cycle-consistency constraint to improve generalized segmentation results.

Methods: We propose a novel unsupervised domain adaptation approach by designing content-consistent generative adversarial networks ($\text{C}^2\text{-GAN}$) for medical image segmentation. First, we introduce content consistency in place of cycle consistency to relax the invertibility constraint, allowing the synthetic domain to be generated with a larger domain transport distance. The synthetic domain is thus pulled closer to the target domain, reducing the domain discrepancy. Second, we propose a novel style transfer loss based on the difference in low-frequency magnitude to further mitigate appearance shifts across domains.
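The abstract does not give the exact formulation of the low-frequency-magnitude style transfer loss, but its core idea can be sketched as follows: compare the amplitude spectra of the synthetic and target images over a low-frequency band, which captures appearance (style) while largely ignoring content. The band width `radius` and the L1 distance are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def low_freq_amplitude(img, radius=0.1):
    """Amplitude spectrum of a 2-D image, masked to a low-frequency band.

    `radius` is the fraction of the (centered) spectrum kept around the
    DC component; it is an assumed hyperparameter for illustration.
    """
    # Centered 2-D FFT amplitude; low frequencies end up near the middle.
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * radius), int(w * radius)
    mask = np.zeros_like(amp)
    mask[cy - ry:cy + ry + 1, cx - rx:cx + rx + 1] = 1.0
    return amp * mask

def style_loss(synthetic, target, radius=0.1):
    """L1 difference between low-frequency amplitude spectra (a sketch)."""
    a = low_freq_amplitude(synthetic, radius)
    b = low_freq_amplitude(target, radius)
    return float(np.mean(np.abs(a - b)))
```

Minimizing such a loss pushes the synthetic image's coarse intensity statistics toward the target domain's, without constraining high-frequency content such as anatomical edges.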

Results: We validate our proposed approach on three public X-ray datasets: Montgomery, JSRT, and Shenzhen. For an accurate evaluation, we randomly divided the images of each dataset into 70% for training, 10% for validation, and 20% for testing. The mean Dice was 95.73 ± 0.22% and 95.16 ± 1.42% for the JSRT and Shenzhen datasets, respectively. On the recall and precision metrics, our model also achieved performance better than or comparable to state-of-the-art CycleGAN-based UDA approaches.
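The reported metrics (Dice, recall, precision) follow standard definitions for binary segmentation masks; a minimal sketch, using an illustrative helper name:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice, recall, and precision for binary segmentation masks.

    Counts true/false positives and false negatives pixel-wise, then
    applies the standard formulas:
      Dice      = 2*TP / (2*TP + FP + FN)
      Recall    = TP / (TP + FN)
      Precision = TP / (TP + FP)
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return float(dice), float(recall), float(precision)
```

A mean Dice near 95% thus indicates that the predicted lung masks overlap almost entirely with the ground-truth annotations.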

Conclusions: The experimental results validate the effectiveness of our method in mitigating the domain gaps and improving generalized segmentation results for X-ray image segmentation.

Keywords: generative adversarial networks; medical image segmentation; unsupervised domain adaptation.

MeSH terms

  • Image Processing, Computer-Assisted* / methods
  • Neural Networks, Computer*