An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making

Artif Intell Med. 2020 Mar;103:101812. doi: 10.1016/j.artmed.2020.101812. Epub 2020 Jan 31.

Abstract

Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely considered. Clinicians are more likely to use a model if they can understand and trust its predictions. Key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box and its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to 'hybrid' BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables does the information flow. The explanation is illustrated using a real clinical case study. A small evaluation study is also conducted.
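To make the first question concrete, the sketch below shows one simple way to gauge which evidence items support or contradict a prediction in a small discrete BN: compute the posterior with all evidence, then recompute it with each finding withheld and compare. This is only an illustrative toy (the network, node names, and CPT values are hypothetical, and the paper's own incremental method for hybrid BNs is more involved), but it conveys the basic idea of ranking evidence by its impact on the target variable.

```python
# Toy illustration (not the paper's algorithm): rank evidence items by their impact
# on a prediction in a small discrete Bayesian network, using exact enumeration.
# Hypothetical structure: Disease -> TestA, Disease -> TestB.

from itertools import product

# Hypothetical conditional probability tables.
P_disease = {True: 0.1, False: 0.9}                                   # P(Disease)
P_testA = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}  # P(TestA | Disease)
P_testB = {True: {True: 0.7, False: 0.3}, False: {True: 0.2, False: 0.8}}  # P(TestB | Disease)

def joint(d, a, b):
    """Joint probability of one full assignment (Disease=d, TestA=a, TestB=b)."""
    return P_disease[d] * P_testA[d][a] * P_testB[d][b]

def posterior_disease(evidence):
    """P(Disease = True | evidence), by enumerating the joint distribution."""
    num = den = 0.0
    for d, a, b in product([True, False], repeat=3):
        assignment = {"TestA": a, "TestB": b}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the observed evidence
        p = joint(d, a, b)
        den += p
        if d:
            num += p
    return num / den

evidence = {"TestA": True, "TestB": False}
baseline = posterior_disease(evidence)
print(f"P(Disease | all evidence) = {baseline:.3f}")

# Impact of each finding: how much does the posterior shift when it is withheld?
for name in evidence:
    reduced = {k: v for k, v in evidence.items() if k != name}
    p = posterior_disease(reduced)
    direction = "supports" if baseline > p else "contradicts"
    print(f"Without {name}: posterior = {p:.3f} -> {name} {direction} the prediction")
```

With these made-up numbers, withholding TestA lowers the posterior (so the positive TestA result supports the prediction of disease) while withholding TestB raises it (so the negative TestB result contradicts it); the second question in the abstract, tracing the flow of information through intermediate variables, would require inspecting the network structure rather than just the posteriors.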

Keywords: Bayesian networks; Decision making; Explanation of reasoning; Trust.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Bayes Theorem*
  • Decision Support Systems, Clinical / organization & administration*
  • Decision Support Systems, Clinical / standards
  • Humans
  • Markov Chains