GLACIER: GLASS-BOX TRANSFORMER FOR INTERPRETABLE DYNAMIC NEUROIMAGING

Proc IEEE Int Conf Acoust Speech Signal Process. 2023 Jun. doi: 10.1109/icassp49357.2023.10097126. Epub 2023 May 5.

Abstract

Deep learning models can perform as well as or better than humans on many tasks, especially vision-related ones. Almost exclusively, these models are used for classification or prediction. However, deep learning models are usually black boxes, and it is often difficult to interpret the model or its features. This lack of interpretability discourages the application of deep learning in fields such as neuroimaging, where results must be transparent and interpretable. We therefore present a 'glass-box' deep learning model and apply it to neuroimaging. Our model mixes the spatial and temporal dimensions in succession to estimate dynamic connectivity between the brain's intrinsic networks. The interpretable connectivity matrices produced by our model allow it to outperform state-of-the-art models on many tasks across multiple functional MRI datasets. More importantly, our model estimates task-based, flexible connectivity matrices, unlike static methods such as Pearson's correlation coefficients.

Keywords: Interpretable DL; fMRI; neuroimaging.
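The abstract's core idea, mixing the spatial and temporal dimensions in succession so that the model's own attention maps serve as dynamic connectivity estimates, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the module name SpatioTemporalMixer, the single-head attention layers, and the 53-network/160-timepoint sizes are all assumptions for illustration.

```python
# Hypothetical sketch (not the authors' code): alternating spatial and
# temporal self-attention, where the spatial attention map doubles as an
# interpretable connectivity matrix. Names and sizes are illustrative.
import torch
import torch.nn as nn

class SpatioTemporalMixer(nn.Module):
    """One mixing step: attend across brain networks (space), then across time."""

    def __init__(self, n_networks: int, n_timepoints: int):
        super().__init__()
        # Spatial attention treats each intrinsic network's time course as a
        # token, so its (n_networks x n_networks) attention map can be read
        # directly as a connectivity estimate.
        self.spatial = nn.MultiheadAttention(
            embed_dim=n_timepoints, num_heads=1, batch_first=True)
        # Temporal attention treats each timepoint as a token.
        self.temporal = nn.MultiheadAttention(
            embed_dim=n_networks, num_heads=1, batch_first=True)

    def forward(self, x):
        # x: (batch, networks, time), e.g. ICA network time courses
        x, connectivity = self.spatial(x, x, x)   # mix across space
        x = x.transpose(1, 2)                     # (batch, time, networks)
        x, _ = self.temporal(x, x, x)             # mix across time
        return x.transpose(1, 2), connectivity

# Toy usage: 53 intrinsic networks, 160 timepoints (illustrative sizes).
x = torch.randn(2, 53, 160)
mixer = SpatioTemporalMixer(n_networks=53, n_timepoints=160)
features, connectivity = mixer(x)
print(connectivity.shape)        # torch.Size([2, 53, 53])

# Static baseline mentioned in the abstract: Pearson correlation collapses
# the entire scan into one fixed matrix, with no temporal flexibility.
static_connectivity = torch.corrcoef(x[0])       # (53, 53)
```

Unlike the single Pearson matrix, the attention-derived estimate can be recomputed per window or per task condition, which is what makes the connectivity "dynamic" in the sense the abstract describes.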