Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks

MethodsX. 2023 Jan 10:10:102009. doi: 10.1016/j.mex.2023.102009. eCollection 2023.

Abstract

Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest, so explaining DNN decisions on multi-modal medical images is a clinically important problem. Our methods adopt commonly used post-hoc explainable artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories: gradient-based and perturbation-based methods.
• Gradient-based explanation methods, such as Guided BackProp and DeepLift, use the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, use sampled input-output pairs to estimate feature importance.
• We describe the implementation details needed to make these methods work for multi-modal image input, and we make the implementation code available (a minimal sketch of the idea is shown below).
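As an illustrative sketch only (not the authors' released implementation), the snippet below assumes a PyTorch classifier whose multi-modal images are stacked along the channel dimension and uses the Captum library to produce one attribution map per modality: first with the gradient-based DeepLift, then with the perturbation-based Occlusion. The toy model, tensor shapes, and modality names are placeholders introduced here for demonstration.

```python
# Hedged sketch: per-modality post-hoc attributions for a multi-modal classifier.
# Assumes modalities (e.g., two MRI sequences) are stacked as input channels;
# the model, shapes, and modality names are illustrative placeholders.
import torch
import torch.nn as nn
from captum.attr import DeepLift, Occlusion

class ToyMultiModalNet(nn.Module):
    """Placeholder two-channel (two-modality) image classifier."""
    def __init__(self, n_modalities=2, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_modalities, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ToyMultiModalNet().eval()
# One sample with two modalities stacked on the channel axis: (N, C=2, H, W).
x = torch.rand(1, 2, 64, 64)
target_class = 1  # class whose score is being explained

# Gradient-based: DeepLift attributions have the same shape as the input,
# so channel c is the attribution map for modality c.
deeplift_attr = DeepLift(model).attribute(x, target=target_class)

# Perturbation-based: occlude one modality at a time with a sliding window
# that covers a single channel, so modalities are perturbed independently.
occlusion_attr = Occlusion(model).attribute(
    x,
    target=target_class,
    sliding_window_shapes=(1, 8, 8),
    strides=(1, 4, 4),
    baselines=0,
)

for name, attr in [("DeepLift", deeplift_attr), ("Occlusion", occlusion_attr)]:
    for m, modality in enumerate(["modality_0", "modality_1"]):
        print(name, modality, "attribution map shape:", tuple(attr[0, m].shape))
```

Splitting the attribution tensor by channel yields a separate importance map per modality, which is the basic device for adapting single-image attribution methods to channel-stacked multi-modal input.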

Keywords: Explainable artificial intelligence; Interpretable machine learning; Medical image analysis; Multi-modal medical image; Post-hoc explanation; Post-hoc feature attribution map explanation methods.