Deep learning-based 3D inpainting of brain MR images

Sci Rep. 2021 Jan 18;11(1):1673. doi: 10.1038/s41598-020-80930-w.

Abstract

The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables a wide range of neuroscience research. However, because of the long scan time required for 3D MR images, mainly 2D images are acquired in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-net-like structure and DenseNet sub-blocks. To train the network, not only a fidelity loss but also a perceptual loss based on the VGG network was used. Several methods were applied to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features of the original 3D data. The diagnostic utility of the inpainted data was also evaluated by examining the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for applying advanced neuroimaging techniques to 2D MRI data.
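As an illustration of the training objective described in the abstract (a fidelity loss combined with a VGG-based perceptual loss), the following is a minimal sketch, not the authors' code. It assumes PyTorch/torchvision, uses VGG-16 as a stand-in since the abstract does not specify the VGG variant or layers, and applies the perceptual term to 2D slices; the layer cutoff and loss weight are illustrative guesses.

```python
# Minimal sketch of a fidelity + VGG perceptual loss (assumptions: PyTorch/torchvision,
# VGG-16 features, 2D slices, illustrative layer cutoff and weight).
import torch
import torch.nn as nn
from torchvision import models


class PerceptualLoss(nn.Module):
    """Feature-space (perceptual) loss using a frozen, ImageNet-pretrained VGG-16."""
    def __init__(self, layer_index=16):  # layer cutoff is an assumption
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(vgg.features[:layer_index])).eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, pred, target):
        # VGG expects 3-channel 2D inputs; single-channel MR slices are repeated.
        pred3 = pred.repeat(1, 3, 1, 1)
        target3 = target.repeat(1, 3, 1, 1)
        return nn.functional.l1_loss(self.features(pred3), self.features(target3))


def total_loss(pred, target, perceptual, lambda_perc=0.1):
    """Voxel-wise L1 fidelity term plus a weighted perceptual term (weight is illustrative)."""
    fidelity = nn.functional.l1_loss(pred, target)
    return fidelity + lambda_perc * perceptual(pred, target)


# Usage on dummy single-channel slices (batch, channel, height, width)
perceptual = PerceptualLoss()
pred = torch.rand(2, 1, 128, 128)
target = torch.rand(2, 1, 128, 128)
loss = total_loss(pred, target, perceptual)
```

In practice, a 3D inpainting network would apply such a perceptual term slice-wise (or with a 3D feature extractor), but the abstract does not specify how the loss was computed over volumes, so this sketch only conveys the general idea.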

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Aged
  • Brain / anatomy & histology*
  • Brain / diagnostic imaging
  • Deep Learning*
  • Female
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Imaging, Three-Dimensional / methods*
  • Magnetic Resonance Imaging / methods*
  • Male
  • Neural Networks, Computer
  • Neuroimaging / methods*