MVFusFra: A Multi-View Dynamic Fusion Framework for Multimodal Brain Tumor Segmentation

IEEE J Biomed Health Inform. 2022 Apr;26(4):1570-1581. doi: 10.1109/JBHI.2021.3122328. Epub 2022 Apr 14.

Abstract

Medical practitioners generally rely on multimodal brain images, for example drawing on information from the axial, coronal, and sagittal views, to inform brain tumor diagnosis. Hence, to further exploit the 3D information embedded in such datasets, this paper proposes a multi-view dynamic fusion framework (hereafter referred to as MVFusFra) to improve the performance of brain tumor segmentation. The proposed framework consists of three key building blocks. First, a multi-view deep neural network architecture, in which multiple learning networks segment the brain tumor from different views and each deep neural network takes the multimodal brain images of a single view as input. Second, a dynamic decision fusion method, which fuses the segmentation results from multiple views into an integrated result; two different fusion strategies (i.e., voting and weighted averaging) are used to evaluate the fusion process. Third, a multi-view fusion loss (comprising a segmentation loss, a transition loss, and a decision loss) is proposed to guide the training of the multi-view learning networks and to ensure consistency in appearance and space, both for fusing the segmentation results and for training the learning networks. We evaluate the performance of MVFusFra on the BRATS 2015 and BRATS 2018 datasets. Findings from the evaluations suggest that fusing results from multiple views achieves better performance than segmentation from any single view, and also imply that the proposed multi-view fusion loss is effective. A comparative summary also shows that MVFusFra achieves better segmentation performance, in terms of efficiency, than other competing approaches.
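
To illustrate the decision-fusion step described in the abstract, the following minimal sketch (Python/NumPy, not taken from the paper; the function names, array shapes, and the uniform default weights are assumptions) fuses per-view class-probability maps, assumed to be already resampled onto a common volume grid, by weighted averaging and by voxel-wise majority voting.

    import numpy as np

    def fuse_weighted_average(prob_maps, weights=None):
        """Fuse per-view class-probability maps by weighted averaging.

        prob_maps: list of arrays, each shaped (num_classes, D, H, W),
                   one per view (e.g., axial, coronal, sagittal).
        weights:   optional per-view weights; uniform if omitted.
        Returns a voxel-wise label map shaped (D, H, W).
        """
        if weights is None:
            weights = np.ones(len(prob_maps)) / len(prob_maps)
        fused = sum(w * p for w, p in zip(weights, prob_maps))
        return fused.argmax(axis=0)

    def fuse_majority_vote(prob_maps):
        """Fuse per-view predictions by voxel-wise majority voting.

        Ties are resolved in favor of the lowest class index.
        """
        # Per-view hard labels: shape (num_views, D, H, W).
        labels = np.stack([p.argmax(axis=0) for p in prob_maps])
        num_classes = prob_maps[0].shape[0]
        # Count votes per class at each voxel: shape (num_classes, D, H, W).
        votes = np.stack([(labels == c).sum(axis=0) for c in range(num_classes)])
        return votes.argmax(axis=0)

    # Example with three views, 4 tumor classes, and a 64^3 volume.
    rng = np.random.default_rng(0)
    views = [rng.dirichlet(np.ones(4), size=(64, 64, 64)).transpose(3, 0, 1, 2)
             for _ in range(3)]
    print(fuse_weighted_average(views).shape)  # (64, 64, 64)
    print(fuse_majority_vote(views).shape)     # (64, 64, 64)

In practice, the per-view weights for the weighted-averaging variant could be derived from validation performance of each view's network; the uniform weighting above is only a placeholder.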

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Brain / diagnostic imaging
  • Brain Neoplasms* / diagnostic imaging
  • Humans
  • Image Processing, Computer-Assisted* / methods
  • Magnetic Resonance Imaging / methods
  • Neural Networks, Computer