Deep Residual Inception Encoder-Decoder Network for Medical Imaging Synthesis

IEEE J Biomed Health Inform. 2020 Jan;24(1):39-49. doi: 10.1109/JBHI.2019.2912659. Epub 2019 Apr 22.

Abstract

Image synthesis is a novel solution in precision medicine for scenarios where important medical imaging is not otherwise available. The convolutional neural network (CNN) is well suited to this task because of the learning capacity provided by its many layers and trainable parameters. In this research, we propose a new residual inception encoder-decoder neural network (RIED-Net) architecture to learn the nonlinear mapping between input images and target output images. To evaluate the validity of the proposed approach, we compare it with two models from the literature, the synthetic CT deep convolutional neural network (sCT-DCNN) and a shallow CNN, on an institutional mammogram dataset from Mayo Clinic Arizona and a public neuroimaging dataset from the Alzheimer's Disease Neuroimaging Initiative. Experimental results show that the proposed RIED-Net significantly outperforms both comparison models on both datasets in terms of structural similarity index, mean absolute percent error, and peak signal-to-noise ratio.
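The abstract does not specify the network configuration, so the following is only a minimal sketch, assuming PyTorch, of the kind of residual inception encoder-decoder the paper describes: parallel multi-scale convolutions combined with a residual shortcut inside each block, arranged in an encoder-decoder with skip connections that regresses a target image from an input image. Class names, filter counts, kernel sizes, and depth below are illustrative assumptions, not the exact RIED-Net design.

```python
# Hypothetical sketch of a residual inception encoder-decoder; layer widths
# and block layout are assumptions, not the published RIED-Net configuration.
import torch
import torch.nn as nn


class ResidualInceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions with a residual shortcut."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3  # assumed split of channels across three branches
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, out_ch - 2 * branch_ch, 5, padding=2), nn.ReLU(inplace=True))
        # 1x1 projection so the shortcut matches the concatenated branch width
        self.project = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        out = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return torch.relu(out + self.project(x))  # residual connection


class RIEDNetSketch(nn.Module):
    """Encoder-decoder mapping an input image to a synthesized target image."""

    def __init__(self, in_ch: int = 1, out_ch: int = 1, base: int = 32):
        super().__init__()
        self.enc1 = ResidualInceptionBlock(in_ch, base)
        self.enc2 = ResidualInceptionBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResidualInceptionBlock(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ResidualInceptionBlock(base * 4, base * 2)  # input includes skip features
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResidualInceptionBlock(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)  # per-pixel intensity prediction

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    model = RIEDNetSketch()
    x = torch.randn(1, 1, 128, 128)  # e.g. a single-channel image patch
    y = model(x)
    print(y.shape)  # torch.Size([1, 1, 128, 128])
```

Trained with a pixel-wise regression loss, a network of this form can then be scored with the metrics named in the abstract (structural similarity index, mean absolute percent error, peak signal-to-noise ratio) against the held-out target images.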

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Algorithms
  • Databases, Factual
  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Mammography
  • Neuroimaging