Unpaired Multi-Modal Segmentation via Knowledge Distillation

IEEE Trans Med Imaging. 2020 Jul;39(7):2415-2425. doi: 10.1109/TMI.2019.2963882. Epub 2020 Feb 3.

Abstract

Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, and it relies on co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI, and employ only modality-specific internal normalization layers, which compute their respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, which explicitly constrains the KL-divergence between the prediction distributions derived from the two modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation and ii) abdominal organ segmentation. Two network settings, a 2D dilated network and a 3D U-net, are used to investigate the general efficacy of our method. Experimental results on both tasks demonstrate that our multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
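
The two ingredients named above, shared convolutional kernels with modality-specific normalization and a KL-divergence alignment term, translate naturally into code. Below is a minimal PyTorch sketch under our own assumptions: the names SharedConvBlock and kd_alignment_loss, the modality keys "ct"/"mr", the temperature T, and the spatial pooling used to derive a per-modality class distribution are illustrative choices, not the authors' implementation (the paper itself uses 2D dilated and 3D U-net backbones).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedConvBlock(nn.Module):
    """Convolutional kernels shared across modalities; normalization kept separate.

    The single Conv2d is reused for both CT and MRI inputs, while each
    modality gets its own BatchNorm2d so that internal feature statistics
    are computed per modality.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.norm = nn.ModuleDict({
            "ct": nn.BatchNorm2d(out_ch),
            "mr": nn.BatchNorm2d(out_ch),
        })

    def forward(self, x, modality):
        # Same kernels for every modality; only the normalization differs.
        return F.relu(self.norm[modality](self.conv(x)))


def kd_alignment_loss(logits_ct, logits_mr, T=2.0):
    """KL-divergence term aligning softened prediction distributions.

    Per-pixel logits are softened with temperature T and pooled over the
    batch and spatial dimensions to give one class distribution per
    modality; the loss is KL(p_ct || p_mr). The pooling step is one
    plausible reading of the "derived prediction distributions" in the
    abstract, not a verified reproduction.
    """
    p_ct = F.softmax(logits_ct / T, dim=1).mean(dim=(0, 2, 3))
    p_mr = F.softmax(logits_mr / T, dim=1).mean(dim=(0, 2, 3))
    # F.kl_div(log_q, p) computes KL(p || q); the first input must be log-probabilities.
    return F.kl_div(p_mr.clamp_min(1e-8).log(), p_ct, reduction="sum")


if __name__ == "__main__":
    block = SharedConvBlock(in_ch=1, out_ch=16)
    head = nn.Conv2d(16, 5, kernel_size=1)          # 5 hypothetical classes
    ct, mr = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
    logits_ct = head(block(ct, "ct"))               # shared kernels, CT statistics
    logits_mr = head(block(mr, "mr"))               # shared kernels, MRI statistics
    print(kd_alignment_loss(logits_ct, logits_mr))  # scalar alignment loss
```

Keeping only the normalization modality-specific is what makes the architecture compact: the convolutions carry the shared anatomical knowledge, while the per-modality statistics absorb the very different intensity profiles of CT and MRI.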

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Magnetic Resonance Imaging
  • Neural Networks, Computer*