Unsupervised-learning-based method for chest MRI-CT transformation using structure constrained unsupervised generative attention networks

Sci Rep. 2022 Jun 30;12(1):11090. doi: 10.1038/s41598-022-14677-x.

Abstract

The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner simultaneously acquires metabolic information via PET and morphological information via MRI. However, attenuation correction, which is necessary for quantitative PET evaluation, is difficult because it requires generating attenuation-correction maps from MRI, which has no direct relationship with gamma-ray attenuation. MRI-based bone tissue segmentation can potentially be used for attenuation correction in relatively rigid and fixed regions such as the head and pelvis. However, it is challenging in the chest because of respiratory and cardiac motion, the anatomically complicated structure of the chest, and the thin bone cortex. We propose a new method using unsupervised generative attentional networks with adaptive layer-instance normalisation for image-to-image translation (U-GAT-IT), which specialises in unpaired image transformation guided by attention maps. We added the modality-independent neighbourhood descriptor (MIND) to the loss function of U-GAT-IT to guarantee anatomical consistency in the image transformation between different domains. Our proposed method produced synthesised computed tomography (CT) images of the chest. Experimental results showed that our method outperformed current approaches. The study findings suggest the possibility of synthesising clinically acceptable CT images from chest MRI with minimal changes in anatomical structures and without human annotation.
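
The sketch below is not the authors' code; it is a minimal PyTorch illustration of the idea described above: computing a simplified 2D MIND descriptor and using the L1 distance between the descriptors of the input MR image and the synthesised CT image as a structural-consistency term added to the generator loss. The patch size, search neighbourhood, single-channel input, and the weighting factor `lambda_mind` are illustrative assumptions, not values from the paper.

    import torch
    import torch.nn.functional as F

    def mind_descriptor(img, patch_radius=1, neigh_radius=1, eps=1e-8):
        """Simplified 2D MIND: patch-wise squared distances from each pixel to its
        4-neighbourhood, normalised by the local mean distance and mapped through
        exp(-d / v). Assumes img has shape (B, 1, H, W)."""
        k = 2 * patch_radius + 1
        # box filter that averages squared differences over a (k x k) patch
        box = torch.ones(1, 1, k, k, device=img.device) / (k * k)

        # displacements defining the search neighbourhood (here: 4-neighbourhood)
        shifts = [(-neigh_radius, 0), (neigh_radius, 0),
                  (0, -neigh_radius), (0, neigh_radius)]

        dists = []
        for dy, dx in shifts:
            shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
            diff2 = (img - shifted) ** 2
            # patch-wise distance for this displacement
            dists.append(F.conv2d(diff2, box, padding=patch_radius))
        dists = torch.cat(dists, dim=1)                    # (B, |R|, H, W)

        variance = dists.mean(dim=1, keepdim=True) + eps   # local normalisation term
        mind = torch.exp(-dists / variance)
        # normalise the descriptor to [0, 1] per pixel (a common implementation choice)
        return mind / (mind.max(dim=1, keepdim=True).values + eps)

    def mind_loss(real_mr, fake_ct):
        """L1 distance between MIND descriptors of the MR input and the synthesised CT."""
        return F.l1_loss(mind_descriptor(real_mr), mind_descriptor(fake_ct))

    # Hypothetical use inside the generator update of an unpaired translation network:
    # g_loss = ugatit_generator_loss + lambda_mind * mind_loss(real_mr, fake_ct)

Because the descriptor depends only on local intensity relationships within each image, the same loss can be evaluated between images from different modalities, which is what allows it to constrain anatomy during unpaired MR-to-CT translation.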

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Image Processing, Computer-Assisted* / methods
  • Magnetic Resonance Imaging* / methods
  • Pelvis
  • Positron-Emission Tomography / methods
  • Tomography, X-Ray Computed