Unsupervised Cross-Modality Domain Adaptation Network for X-Ray to CT Registration

IEEE J Biomed Health Inform. 2022 Jun;26(6):2637-2647. doi: 10.1109/JBHI.2021.3135890. Epub 2022 Jun 3.

Abstract

2D/3D registration that achieves high accuracy and real-time computation is one of the enabling technologies for radiotherapy and image-guided surgery. Recently, Convolutional Neural Networks (CNNs) have been explored to significantly improve the accuracy and efficiency of 2D/3D registration. A pair of intraoperative 2D X-ray images and synthetic data generated from the pre-operative volume are typically required to model the nonconvex mapping between registration parameters and image residuals. However, collecting a large clinical dataset with accurate poses for X-ray images can be very challenging or even impractical, while training exclusively on synthetic data frequently degrades performance when the model is tested on real X-rays. Thus, we propose to first train a model on the source domain (i.e., synthetic data) to build the appearance-pose relationship, and then use an unsupervised cross-modality domain adaptation network (UCMDAN) to adapt the model to the target domain (i.e., X-rays) through adversarial learning. We propose to narrow the significant domain gap by alignment in both pixel space and feature space. In particular, image appearance transformation and domain-invariant feature learning across multiple aspects are conducted synergistically. Extensive experiments on CT and CBCT datasets show that the proposed UCMDAN outperforms existing state-of-the-art domain adaptation approaches.
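To make the adversarial feature-space alignment described in the abstract concrete, the following is a minimal sketch of the general idea: a shared encoder produces features for both synthetic (source) and X-ray (target) images, and a domain discriminator is trained against the encoder so that target features become indistinguishable from source features. All module names (FeatureEncoder, DomainDiscriminator), network sizes, and hyperparameters are illustrative assumptions, not the UCMDAN architecture; the pixel-space appearance translation that the paper also performs is omitted here.

```python
# Hedged sketch of adversarial feature-space domain adaptation
# (synthetic DRRs as source, real X-rays as target). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureEncoder(nn.Module):
    """Shared CNN mapping a 1-channel image to a feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)


class DomainDiscriminator(nn.Module):
    """Predicts whether a feature vector came from the source or target domain."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, f):
        return self.net(f)


encoder = FeatureEncoder()
disc = DomainDiscriminator()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)


def adaptation_step(drr_batch, xray_batch):
    """One adversarial step: the discriminator separates domains, then the
    encoder is updated so target (X-ray) features resemble source features."""
    # 1) Update discriminator: source -> label 1, target -> label 0.
    f_src = encoder(drr_batch).detach()
    f_tgt = encoder(xray_batch).detach()
    d_loss = (
        F.binary_cross_entropy_with_logits(disc(f_src), torch.ones(f_src.size(0), 1))
        + F.binary_cross_entropy_with_logits(disc(f_tgt), torch.zeros(f_tgt.size(0), 1))
    )
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Update encoder: fool the discriminator on target features.
    f_tgt = encoder(xray_batch)
    g_loss = F.binary_cross_entropy_with_logits(disc(f_tgt), torch.ones(f_tgt.size(0), 1))
    opt_enc.zero_grad(); g_loss.backward(); opt_enc.step()
    return d_loss.item(), g_loss.item()


# Example usage with random stand-in batches of 1-channel 128x128 images.
drrs, xrays = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)
print(adaptation_step(drrs, xrays))
```

In a full pipeline of this kind, the encoder would be the feature extractor of the pose-regression network trained on synthetic data, and the adversarial loss would be combined with the pixel-space image translation and the original registration objective rather than used in isolation.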

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Image Processing, Computer-Assisted*
  • Neural Networks, Computer*
  • Radiography
  • Tomography, X-Ray Computed
  • X-Rays