Transformer-Based T2-weighted MRI Synthesis from T1-weighted Images

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:5062-5065. doi: 10.1109/EMBC48229.2022.9871183.

Abstract

Multi-modality magnetic resonance (MR) images provide complementary information for disease diagnosis. However, missing modalities are common in real-life clinical practice. Current methods usually employ a convolution-based generative adversarial network (GAN) or one of its variants to synthesize the missing modality. With the development of the vision transformer, we explore its application to the MRI modality synthesis task in this work. We propose a novel supervised deep learning method for synthesizing a missing modality, making use of a transformer-based encoder. Specifically, a model is trained to translate 2D MR images from T1-weighted to T2-weighted based on a conditional GAN (cGAN). We replace the encoder with a transformer and input adjacent slices to enrich spatial prior knowledge. Experimental results on a private dataset and a public dataset demonstrate that our proposed model outperforms state-of-the-art supervised methods for MR image synthesis, both quantitatively and qualitatively. Clinical relevance: This work proposes a method to synthesize T2-weighted images from T1-weighted ones to address the missing-modality issue in MRI.
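The architecture described above (a cGAN generator whose convolutional encoder is replaced by a transformer, fed a stack of adjacent T1-weighted slices) could be sketched roughly as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: all layer sizes, the patch size, the 3-slice input, and the class names are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class TransformerEncoderGenerator(nn.Module):
    """Hypothetical sketch of the paper's idea: a ViT-style encoder over a
    stack of adjacent T1 slices, followed by a small convolutional decoder
    that emits the synthesized T2 slice. All hyperparameters are assumed."""
    def __init__(self, img_size=64, patch=8, in_slices=3, dim=128, depth=4, heads=4):
        super().__init__()
        self.n = img_size // patch  # patches per side
        # Patch embedding: split the slice stack into non-overlapping patches
        self.embed = nn.Conv2d(in_slices, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.n * self.n, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 2,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Conv decoder upsamples the patch tokens back to full resolution
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(dim, 64, kernel_size=patch // 2, stride=patch // 2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, kernel_size=2, stride=2),
            nn.Tanh(),  # T2 intensities normalized to [-1, 1]
        )

    def forward(self, x):
        b = x.size(0)
        # (B, in_slices, H, W) -> (B, N, dim) token sequence
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        tokens = self.encoder(tokens)
        feat = tokens.transpose(1, 2).reshape(b, -1, self.n, self.n)
        return self.decode(feat)

class PatchDiscriminator(nn.Module):
    """Pix2pix-style conditional discriminator: scores patches of the
    (T1 stack, T2 slice) pair as real or synthesized."""
    def __init__(self, in_ch=4):  # 3 T1 slices + 1 T2 slice, concatenated
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, 1, 1),  # per-patch real/fake logits
        )

    def forward(self, t1_stack, t2_slice):
        return self.net(torch.cat([t1_stack, t2_slice], dim=1))
```

Under this sketch, a batch of 3-slice T1 stacks of shape `(B, 3, 64, 64)` maps to synthesized T2 slices of shape `(B, 1, 64, 64)`, and the discriminator returns a grid of per-patch logits used for the cGAN adversarial loss (typically combined with an L1 reconstruction term against the real T2 slice).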

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Magnetic Resonance Imaging* / methods