Multimodal super-resolved q-space deep learning

Med Image Anal. 2021 Jul:71:102085. doi: 10.1016/j.media.2021.102085. Epub 2021 Apr 21.

Abstract

Super-resolved q-space deep learning (SR-q-DL) estimates high-resolution (HR) tissue microstructure maps from low-quality diffusion magnetic resonance imaging (dMRI) scans acquired with a reduced number of diffusion gradients and low spatial resolution, using deep networks designed for the estimation. However, existing methods do not exploit HR information from other modalities, which are generally acquired together with dMRI and could provide additional useful information for HR tissue microstructure estimation. In this work, we extend SR-q-DL and propose multimodal SR-q-DL, where information in low-resolution (LR) dMRI is combined with HR information from another modality for HR tissue microstructure estimation. Because the HR modality may not be as sensitive to tissue microstructure as dMRI, direct concatenation of multimodal information does not necessarily improve estimation performance. Since existing deep networks for HR tissue microstructure estimation are patch-based and exploit redundant information in the spatial domain to enhance spatial resolution, the HR information in the other modality can instead inform the deep networks about which input voxels are relevant to the computation of tissue microstructure. Thus, we propose to incorporate the HR information by designing an attention module that guides the computation of HR tissue microstructure from LR dMRI. Specifically, the attention module is integrated with the patch-based SR-q-DL framework that exploits the sparsity of diffusion signals. The sparse representation of the LR diffusion signals in the input patch is first computed with a network component that unrolls an iterative process for sparse reconstruction. Then, the proposed attention module computes a relevance map from the HR modality with sequential convolutional layers. The relevance map indicates the relevance of the LR sparse representation at each voxel for computing the patch of HR tissue microstructure. The relevance map is applied to the LR sparse representation by voxelwise multiplication, and the weighted LR sparse representation is used to compute HR tissue microstructure with another network component that performs resolution enhancement. All weights in the proposed network for multimodal SR-q-DL are jointly learned, and the estimation is end-to-end. To evaluate the proposed method, we performed experiments on brain dMRI scans together with images of additional HR modalities. In these experiments, the proposed method was applied to estimate tissue microstructure measures for different datasets and advanced biophysical models, demonstrating the benefit of incorporating multimodal information with the proposed method.
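To make the described pipeline concrete, below is a minimal PyTorch sketch of the three components named in the abstract, not the authors' implementation: an unrolled iterative soft-thresholding (LISTA-style) component that computes sparse codes of the LR diffusion signals, a convolutional attention module that reduces the co-registered HR-modality patch to a voxelwise relevance map on the LR grid, and a mapping stage that performs resolution enhancement. All layer widths, the number of unrolled iterations, the upsampling factor, and the use of a transposed convolution for upsampling are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LISTASparseCoder(nn.Module):
    """Unrolled iterative soft-thresholding (LISTA-style) that maps the
    LR diffusion signals at each voxel to a sparse representation.
    Dictionary size and iteration count are illustrative."""

    def __init__(self, n_signals=30, n_atoms=128, n_iters=5):
        super().__init__()
        # 1x1x1 convolutions act as per-voxel linear maps.
        self.We = nn.Conv3d(n_signals, n_atoms, kernel_size=1)
        self.S = nn.Conv3d(n_atoms, n_atoms, kernel_size=1)
        self.theta = nn.Parameter(torch.full((1, n_atoms, 1, 1, 1), 0.1))
        self.n_iters = n_iters

    def _soft(self, x):
        # Soft-thresholding with a learned, per-atom threshold.
        return torch.sign(x) * torch.relu(torch.abs(x) - self.theta)

    def forward(self, x):                  # x: (B, n_signals, d, h, w)
        b = self.We(x)
        z = self._soft(b)
        for _ in range(self.n_iters):      # unrolled ISTA iterations
            z = self._soft(b + self.S(z))
        return z                           # sparse codes per LR voxel


class RelevanceAttention(nn.Module):
    """Sequential convolutions that reduce the aligned HR-modality patch
    to a voxelwise relevance map on the LR grid."""

    def __init__(self, in_ch=1, hidden=32, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            # Strided first layer brings the HR grid down to the LR grid.
            nn.Conv3d(in_ch, hidden, 3, stride=scale, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, hr_patch):           # (B, in_ch, scale*d, scale*h, scale*w)
        return self.net(hr_patch)          # relevance in [0, 1] per LR voxel


class MultimodalSRqDL(nn.Module):
    """End-to-end sketch: sparse coding of LR dMRI, attention weighting
    by the HR modality, then mapping with resolution enhancement."""

    def __init__(self, n_signals=30, n_atoms=128, n_measures=3, scale=2):
        super().__init__()
        self.coder = LISTASparseCoder(n_signals, n_atoms)
        self.attn = RelevanceAttention(in_ch=1, scale=scale)
        self.mapper = nn.Sequential(
            nn.Conv3d(n_atoms, 64, 3, padding=1), nn.ReLU(),
            # Transposed convolution enhances the spatial resolution.
            nn.ConvTranspose3d(64, 64, scale, stride=scale), nn.ReLU(),
            nn.Conv3d(64, n_measures, 3, padding=1),
        )

    def forward(self, lr_dmri, hr_modality):
        z = self.coder(lr_dmri)            # (B, n_atoms, d, h, w)
        r = self.attn(hr_modality)         # (B, 1, d, h, w)
        return self.mapper(z * r)          # HR microstructure maps
```

A usage example with hypothetical patch sizes (30 diffusion signals, 2x resolution enhancement, 3 microstructure measures):

```python
model = MultimodalSRqDL()
lr = torch.randn(1, 30, 8, 8, 8)      # LR dMRI patch, 30 diffusion signals
hr = torch.randn(1, 1, 16, 16, 16)    # co-registered HR structural patch
print(model(lr, hr).shape)            # torch.Size([1, 3, 16, 16, 16])
```

The voxelwise multiplication acts as a soft gate: the HR modality does not contribute microstructure information directly, but reweights the LR sparse codes so that the resolution-enhancing mapping draws on the most relevant input voxels. Consistent with the abstract, all parameters of the three components are trained jointly end-to-end.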

Keywords: Diffusion MRI; Multimodal information; Resolution enhancement; Tissue microstructure.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Deep Learning*
  • Diffusion Magnetic Resonance Imaging
  • Humans
  • Neuroimaging