Explainability of deep learning models in medical video analysis: a survey

PeerJ Comput Sci. 2023 Mar 14;9:e1253. doi: 10.7717/peerj-cs.1253. eCollection 2023.

Abstract

Deep learning methods have proven effective for multiple diagnostic tasks in medicine and have significantly outperformed traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Explainability of machine learning models, which focuses on providing comprehensible explanations of model outputs, may therefore affect the prospects of adopting such models in clinical practice. While various studies review approaches to explainability across multiple domains, this article reviews the current approaches and applications of explainable deep learning for a specific area of medical data analysis: medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods and evaluation metrics, focusing on those applicable to analytical tasks involving the processing of video data in the medical domain. Finally, we identify some of the open research issues in the analysed area.

Keywords: Deep learning; Explainability; Explainable AI; Interpretability; Medical video analysis.

Grants and funding

This work was supported by the Slovak Research and Development Agency under contracts No. APVV-20-0232 and No. APVV-17-0550, and by the Slovak VEGA research grant No. 1/0685/21. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.