Multi input-Multi output 3D CNN for dementia severity assessment with incomplete multimodal data

Artif Intell Med. 2024 Mar;149:102774. doi: 10.1016/j.artmed.2024.102774. Epub 2024 Jan 24.

Abstract

Alzheimer's Disease is the most common cause of dementia, whose progression spans different stages, from very mild cognitive impairment to mild and severe conditions. In clinical trials, Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are most often used for the early diagnosis of neurodegenerative disorders, since they provide volumetric and metabolic information about the brain, respectively. In recent years, Deep Learning (DL) has been employed in medical imaging with promising results. Moreover, the use of deep neural networks, especially Convolutional Neural Networks (CNNs), has enabled the development of DL-based solutions in domains that require leveraging information from multiple data sources, giving rise to Multimodal Deep Learning (MDL). In this paper, we conduct a systematic analysis of MDL approaches for dementia severity assessment that exploit MRI and PET scans. We propose a Multi Input-Multi Output 3D CNN whose training iterations change according to the characteristics of the input, allowing it to handle incomplete acquisitions in which one image modality is missing. Experiments performed on the OASIS-3 dataset show satisfactory results for the implemented network, which outperforms both approaches exploiting a single image modality and alternative MDL fusion techniques.
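The abstract does not detail the architecture; the following is only a minimal PyTorch sketch of the general multi input-multi output idea it describes, assuming two modality-specific 3D branches (MRI, PET) with per-branch and fused output heads, where a missing modality simply skips its branch. The class name MIMO3DCNN, layer sizes, fusion by concatenation, and head layout are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3D convolution -> batch norm -> ReLU -> 2x downsampling."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )


class MIMO3DCNN(nn.Module):
    """Hypothetical multi input-multi output 3D CNN: one branch per modality,
    a head per modality, and a fused head used when both scans are present."""

    def __init__(self, n_classes=4):
        super().__init__()
        self.mri_branch = nn.Sequential(conv_block(1, 8), conv_block(8, 16), conv_block(16, 32))
        self.pet_branch = nn.Sequential(conv_block(1, 8), conv_block(8, 16), conv_block(16, 32))
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.mri_head = nn.Linear(32, n_classes)    # used when only MRI is available
        self.pet_head = nn.Linear(32, n_classes)    # used when only PET is available
        self.fused_head = nn.Linear(64, n_classes)  # used for complete acquisitions

    def forward(self, mri=None, pet=None):
        outputs, feats = {}, []
        if mri is not None:
            f = self.pool(self.mri_branch(mri)).flatten(1)
            outputs["mri"] = self.mri_head(f)
            feats.append(f)
        if pet is not None:
            f = self.pool(self.pet_branch(pet)).flatten(1)
            outputs["pet"] = self.pet_head(f)
            feats.append(f)
        if len(feats) == 2:  # both modalities present: also produce a fused prediction
            outputs["fused"] = self.fused_head(torch.cat(feats, dim=1))
        return outputs


# Toy usage: a complete MRI+PET acquisition and an MRI-only acquisition.
model = MIMO3DCNN(n_classes=4)
mri = torch.randn(2, 1, 32, 32, 32)  # (batch, channel, D, H, W); real scans are larger
pet = torch.randn(2, 1, 32, 32, 32)
print(model(mri=mri, pet=pet).keys())  # dict_keys(['mri', 'pet', 'fused'])
print(model(mri=mri).keys())           # dict_keys(['mri']) -- PET missing
```

In this sketch, the set of losses backpropagated at each training iteration would depend on which outputs the batch produces, mirroring the paper's idea that training iterations change according to the characteristics of the input.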

Keywords: Convolutional neural networks; Magnetic resonance images; Multimodal deep learning; Positron emission tomography.

MeSH terms

  • Alzheimer Disease* / diagnostic imaging
  • Cognitive Dysfunction* / diagnostic imaging
  • Humans
  • Magnetic Resonance Imaging / methods
  • Neural Networks, Computer
  • Positron-Emission Tomography / methods