Explainable, trustworthy, and ethical machine learning for healthcare: A survey

Comput Biol Med. 2022 Oct;149:106043. doi: 10.1016/j.compbiomed.2022.106043. Epub 2022 Sep 7.

Abstract

With the advent of machine learning (ML)- and deep learning (DL)-empowered applications in critical domains such as healthcare, questions about the liability, trust, and interpretability of their outputs are being raised. The black-box nature of many DL models remains a roadblock to clinical adoption. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of these models. With the promise of enhancing the trust and transparency of black-box models, researchers are working to mature the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare, and we describe how explainable and trustworthy ML can help resolve these ethical problems. Finally, we elaborate on the limitations of existing approaches and highlight open research problems that require further development.
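
For readers unfamiliar with the post-hoc explanation methods this survey covers, the following is a minimal, illustrative sketch (not taken from the paper): it trains a black-box classifier on synthetic tabular data standing in for clinical records and ranks features by model-agnostic permutation importance using scikit-learn. The feature names and model choice are assumptions made purely for illustration.

    # Illustrative sketch of a post-hoc, model-agnostic explanation workflow.
    # Synthetic data and feature names are hypothetical stand-ins for a clinical dataset.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic tabular data with hypothetical clinical feature labels.
    X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
    feature_names = ["age", "blood_pressure", "glucose", "bmi", "heart_rate"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A "black-box" model whose decisions we want to explain post hoc.
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Permutation importance: how much does shuffling each feature degrade held-out accuracy?
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

    for idx in result.importances_mean.argsort()[::-1]:
        print(f"{feature_names[idx]:>15}: {result.importances_mean[idx]:.3f} "
              f"+/- {result.importances_std[idx]:.3f}")

A higher mean importance means that shuffling that feature hurts held-out accuracy more, which is one simple, model-agnostic way to surface which inputs drive a black-box model's predictions.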

Keywords: Explainable machine learning; Healthcare; Interpretable machine learning; Trustworthiness.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Delivery of Health Care
  • Health Facilities*
  • Humans
  • Machine Learning*
  • Surveys and Questionnaires