Sparse Interpretation of Graph Convolutional Networks for Multi-Modal Diagnosis of Alzheimer's Disease

Med Image Comput Comput Assist Interv. 2022 Sep;13438:469-478. doi: 10.1007/978-3-031-16452-1_45. Epub 2022 Sep 16.

Abstract

The interconnected quality of brain regions in neurological disease holds immense importance for the development of biomarkers and diagnostics. While Graph Convolutional Network (GCN) methods are fundamentally compatible with discovering the connected role of brain regions in disease, current methods give limited consideration to node features and their connectivity in brain network analysis. In this paper, we propose a sparse interpretable GCN framework (SGCN) for the identification and classification of Alzheimer's disease (AD) using brain imaging data with multiple modalities. SGCN applies a sparsity-inducing attention mechanism to identify the most discriminative subgraph structure and the most important node features for the detection of AD. The model learns sparse importance probabilities for each node feature and edge under entropy, ℓ1, and mutual-information regularization. We then utilize this information to find signature regions of interest (ROIs) and to emphasize disease-specific brain network connections by detecting significant differences in connectivity between the healthy control (HC) and AD groups. We evaluated SGCN on the ADNI database with imaging data from three modalities, including VBM-MRI, FDG-PET, and AV45-PET, and observed that the importance probabilities it learns are effective for disease status identification and for the sparse interpretation of disease-specific ROI features and connections. The salient ROIs detected and the most discriminative network connections interpreted by our method show high correspondence with previous neuroimaging evidence associated with AD.
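To make the regularization concrete, the following is a minimal sketch of how sparse importance probabilities for edges or node features can be penalized with entropy and ℓ1 terms, as described above. The function name, the use of sigmoid-transformed logits, and the weighting coefficients are illustrative assumptions, not the paper's exact objective; the mutual-information term is omitted for brevity.

```python
import numpy as np

def mask_regularization(logits, lambda_l1=0.01, lambda_ent=0.1):
    """Sparsity penalties on learnable edge/feature mask logits.

    The importance probabilities p = sigmoid(logits) are pushed toward
    sparse, near-binary values by an element-wise entropy term (low when
    p is close to 0 or 1) and an l1 term (low when few entries are
    active). Names and coefficients here are illustrative assumptions.
    """
    p = 1.0 / (1.0 + np.exp(-logits))  # importance probabilities in (0, 1)
    eps = 1e-12                        # numerical guard for log(0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps)).mean()
    l1 = np.abs(p).mean()
    return lambda_ent * entropy + lambda_l1 * l1

# Confident masks (logits far from 0) incur a lower penalty than
# uncertain masks (logits near 0), which drives the learned
# probabilities toward a sparse, interpretable subgraph.
confident = mask_regularization(np.array([10.0, -10.0]))
uncertain = mask_regularization(np.array([0.0, 0.0]))
```

In practice such a penalty would be added to the classification loss and minimized jointly with the GCN weights, so that only the edges and features most useful for AD detection retain high importance probabilities.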

Keywords: Graph convolutional network; Multi-Modality; Neuroimaging; Sparse interpretation.