A Novel 3D Unsupervised Domain Adaptation Framework for Cross-Modality Medical Image Segmentation

IEEE J Biomed Health Inform. 2022 Oct;26(10):4976-4986. doi: 10.1109/JBHI.2022.3162118. Epub 2022 Oct 4.

Abstract

We consider the problem of volumetric (3D) unsupervised domain adaptation (UDA) for cross-modality medical image segmentation, aiming to perform segmentation on an unannotated target domain (e.g., MRI) with the help of a labeled source domain (e.g., CT). Previous UDA methods in medical image analysis usually suffer from two challenges: 1) they process and analyze data at the 2D level only, thus missing semantic information along the depth dimension; 2) they adopt a one-to-one mapping during style transfer, leading to insufficient alignment in the target domain. Different from existing methods, we conduct a first-of-its-kind investigation of multi-style image translation for complete image alignment to alleviate the domain shift problem, and we introduce 3D segmentation into the domain adaptation task to maintain semantic consistency along the depth dimension. In particular, we develop an unsupervised domain adaptation framework incorporating a novel quartet self-attention module that efficiently strengthens relationships between widely separated spatial features in the higher-dimensional (volumetric) feature space, leading to a substantial improvement in segmentation accuracy on the unlabeled target domain. On two challenging cross-modality tasks, brain structure and multi-organ abdominal segmentation, our model outperforms current state-of-the-art methods by a significant margin, demonstrating its potential as a benchmark resource for the biomedical and health informatics research community.
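
The record does not specify how the quartet self-attention module operates on 3D features. Purely as an illustration, the sketch below shows one plausible reading: self-attention applied separately along each spatial axis (depth, height, width) of a volumetric feature map, combined with a channel re-weighting branch as the fourth member of the quartet. All class names, branch choices, and hyperparameters here are assumptions, not the authors' implementation.

# Minimal sketch (assumption, not the paper's code): axis-wise self-attention
# over a 3D feature map, one branch per spatial axis plus a channel gate.
import torch
import torch.nn as nn


def _attend_along(x, axis, attn):
    # x: (B, C, D, H, W). Fold the two non-selected spatial dims into the
    # batch so attention mixes features only along the chosen axis.
    B, C, D, H, W = x.shape
    perm = [0, 1] + [d for d in (2, 3, 4) if d != axis] + [axis]
    xp = x.permute(*perm).contiguous()            # (B, C, S1, S2, L)
    S1, S2, L = xp.shape[2], xp.shape[3], xp.shape[4]
    seq = xp.view(B, C, S1 * S2, L).permute(0, 2, 3, 1).reshape(B * S1 * S2, L, C)
    out, _ = attn(seq, seq, seq)                  # self-attention along the axis
    out = out.reshape(B, S1 * S2, L, C).permute(0, 3, 1, 2).reshape(B, C, S1, S2, L)
    inv = [perm.index(i) for i in range(5)]       # undo the permutation
    return out.permute(*inv).contiguous()


class QuartetSelfAttention(nn.Module):
    """Sum of axis-wise attentions over D, H, W plus a channel branch."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.axis_attn = nn.ModuleList(
            [nn.MultiheadAttention(channels, heads, batch_first=True) for _ in range(3)]
        )
        self.channel_gate = nn.Sequential(        # lightweight channel re-weighting
            nn.AdaptiveAvgPool3d(1), nn.Conv3d(channels, channels, 1), nn.Sigmoid()
        )

    def forward(self, x):                         # x: (B, C, D, H, W)
        out = x
        for axis, attn in zip((2, 3, 4), self.axis_attn):
            out = out + _attend_along(x, axis, attn)
        return out * self.channel_gate(x)         # fourth branch: channel gating


if __name__ == "__main__":
    feat = torch.randn(1, 16, 8, 16, 16)          # small 3D feature map
    print(QuartetSelfAttention(16)(feat).shape)   # torch.Size([1, 16, 8, 16, 16])

Such a block would typically be inserted between encoder and decoder stages of a 3D segmentation network; the actual placement and design in the published framework may differ.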

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Abdomen*
  • Brain / diagnostic imaging
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Magnetic Resonance Imaging* / methods