Survey of Explainable AI Techniques in Healthcare

Sensors (Basel). 2023 Jan 5;23(2):634. doi: 10.3390/s23020634.

Abstract

Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision carries risk. A doctor carefully judges whether a patient is ill and forms a reasoned explanation based on the patient's symptoms and/or examination findings. To be a viable and accepted tool, AI therefore needs to mirror this human capacity for judgment and interpretation. Specifically, explainable AI (XAI) aims to expose the information behind the black box of deep learning models and reveal how their decisions are made. This paper surveys the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the types of XAI and highlight the algorithms used to increase the interpretability of deep learning models in medical imaging. In addition, we discuss the challenges XAI faces in medical applications and provide guidelines for developing better interpretations of deep learning models, using XAI concepts, in medical image and text analysis. Finally, this survey outlines future directions to guide developers and researchers in prospective investigations of clinical topics, particularly applications involving medical imaging.
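
To make the idea of "revealing how decisions are made" concrete, the sketch below shows a gradient-based saliency map, one of the simplest XAI techniques of the kind such surveys cover. It is an illustration only, not code from the paper: the tiny CNN and the random single-channel "scan" are placeholders for a real trained medical-imaging model and image, and PyTorch is an assumed framework choice.

```python
# Minimal sketch of a gradient-based saliency map (illustrative only).
# Assumptions: PyTorch is available; the model and input stand in for a
# real medical-imaging classifier and scan.
import torch
import torch.nn as nn

# Placeholder classifier standing in for a trained medical-imaging model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g., healthy vs. diseased
)
model.eval()

# A single-channel 64x64 "image"; gradients are tracked w.r.t. its pixels.
image = torch.rand(1, 1, 64, 64, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted = logits.argmax(dim=1).item()
logits[0, predicted].backward()

# The saliency map is the absolute input gradient: pixels whose small
# changes most affect the class score are where the model "looked".
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

In a clinical setting, such a map would be overlaid on the original scan so a physician can check whether the model's evidence coincides with anatomically plausible regions; richer techniques covered by the survey (e.g., class activation mapping or perturbation-based attribution) refine this same idea.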

Keywords: deep learning; explainable AI; medical imaging; radiomics.

Publication types

  • Review

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Humans
  • Judgment
  • Physicians*
  • Research Personnel

Grants and funding

This research was funded by the National Natural Science Foundation (grant number 82260360) and the Foreign Young Talents Program (QN2021033002L).