A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images

Sensors (Basel). 2022 Jan 11;22(2):523. doi: 10.3390/s22020523.

Abstract

Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide information that no single modality can provide alone, integrating that information for segmentation remains challenging. Numerous methods have been introduced in recent years to address multi-modal medical image segmentation. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method of enhancing an existing magnetic resonance imaging (MRI) dataset by generating synthetic computed tomography (CT) images. We then describe a systematic optimization of a convolutional neural network (CNN) architecture that uses this enhanced dataset, tailoring it to our task. Using publicly available datasets, we show that the proposed method outperforms similar existing methods.
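At the data-preparation level, the pipeline the abstract describes (synthesize CT from MRI, then feed both modalities to a 3-D segmentation CNN) can be sketched as follows. This is a minimal illustration only: the `synthesize_ct` rescaling stand-in and the function names are assumptions for the example, not the paper's actual image-translation model, which would typically be a learned network.

```python
import numpy as np

def synthesize_ct(mri: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned MRI-to-CT translator.
    A simple intensity rescale keeps the example self-contained;
    the real method would use a trained image-to-image model."""
    lo, hi = mri.min(), mri.max()
    return (mri - lo) / (hi - lo + 1e-8)

def fuse_modalities(mri: np.ndarray, ct: np.ndarray) -> np.ndarray:
    """Stack the real MRI and synthetic CT volumes as input channels,
    the common way multi-modal data is presented to a 3-D CNN."""
    return np.stack([mri, ct], axis=0)  # shape: (channels, D, H, W)

# Toy 3-D MRI volume (depth x height x width).
mri = np.random.rand(16, 64, 64).astype(np.float32)
ct = synthesize_ct(mri)
x = fuse_modalities(mri, ct)
print(x.shape)  # (2, 16, 64, 64)
```

The stacked tensor is then what a multi-channel 3-D segmentation network would consume, letting the convolution kernels learn cross-modality features directly.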

Keywords: brain tumor; deep learning; image fusion; medical image processing; segmentation.

MeSH terms

  • Brain Neoplasms* / diagnostic imaging
  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging
  • Tomography, X-Ray Computed