Weakly-supervised learning of multi-modal features for regularised iterative descent in 3D image registration

Med Image Anal. 2021 Jan:67:101822. doi: 10.1016/j.media.2020.101822. Epub 2020 Oct 6.

Abstract

Methods for deep learning based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and often insufficient availability of expert-supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet, image registration stands to benefit more directly from an iterative solution than segmentation does. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. In this work, we examine an end-to-end trainable, weakly-supervised deep learning-based feature extraction approach that is able to map the complex appearance to a common space. Our results on thoracoabdominal CT and MRI image registration show that the proposed method compares favourably to state-of-the-art hand-crafted multi-modal features, Mutual Information-based approaches and fully-integrated CNN-based methods - and copes even with small and only weakly-labelled training data sets.
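The disentanglement the abstract argues for - learning modality-invariant features separately from the deformation estimation, then running a regularised iterative descent on the feature maps - can be illustrated with a minimal NumPy sketch. Here a hand-crafted two-channel extractor (intensity plus gradient) stands in for the learned network, and a single 1-D translation stands in for a dense 3D deformation; the function names and the Gauss-Newton-style regularised update are illustrative choices, not the authors' implementation.

```python
import numpy as np

def extract_features(sig):
    # Stand-in for the learned multi-modal feature extractor: it maps the
    # input signal to a common feature space (here, intensity + derivative).
    return np.stack([sig, np.gradient(sig)])

def warp(sig, t):
    # Warp a 1-D signal by a translation t using linear interpolation.
    x = np.arange(sig.size, dtype=float)
    return np.interp(x + t, x, sig)

def register(fixed, moving, steps=20, reg=1e-3, eps=1e-4):
    # Regularised iterative descent on the transform parameter t,
    # minimising the sum-of-squared-differences between FEATURE maps
    # (not raw intensities) plus a quadratic regulariser on t.
    f_fix = extract_features(fixed)
    t = 0.0
    for _ in range(steps):
        f_mov = extract_features(warp(moving, t))
        r = (f_mov - f_fix).ravel()                      # residual
        f_pert = extract_features(warp(moving, t + eps))
        J = (f_pert - f_mov).ravel() / eps               # dr/dt (finite diff.)
        # Regularised Gauss-Newton step.
        t -= (J @ r + reg * t) / (J @ J + reg)
    return t

# Toy example: recover a known shift of 3 samples between two bumps.
x = np.arange(100.0)
fixed = np.exp(-((x - 50.0) ** 2) / (2 * 5.0 ** 2))
moving = np.exp(-((x - 53.0) ** 2) / (2 * 5.0 ** 2))  # fixed shifted by +3
t_hat = register(fixed, moving)
```

In the paper's actual setting the extractor is a trained CNN shared across modalities and the transform is a dense 3D displacement field, but the separation of concerns is the same: once both images live in the common feature space, registration reduces to a classical regularised descent.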

Keywords: Image registration; Machine learning; Multi-modal features.

MeSH terms

  • Humans
  • Imaging, Three-Dimensional*
  • Magnetic Resonance Imaging*
  • Supervised Machine Learning