Investigating the understandability of XAI methods for enhanced user experience: When Bayesian network users became detectives

Artif Intell Med. 2022 Dec;134:102438. doi: 10.1016/j.artmed.2022.102438. Epub 2022 Nov 9.

Abstract

In the medical domain, the uptake of an AI tool crucially depends on whether clinicians are confident that they understand the tool. Bayesian networks are popular AI models in medicine, yet explaining their predictions to physicians and patients is non-trivial. Various explanation methods for Bayesian network inference have appeared in the literature, each focusing on different aspects of the underlying reasoning. While there has been much technical research, little is known about the actual user experience of such methods. In this paper, we present the results of a study in which four explanation approaches were evaluated through a survey: a group of human participants was questioned on their perceived understanding in order to gain insight into their user experience.
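The abstract notes that explaining Bayesian network inference to clinicians is non-trivial. As a hypothetical illustration (not taken from the paper, and with invented probabilities), the following sketch computes a posterior in a toy two-node disease–test network; this is the kind of probabilistic update that the surveyed explanation methods aim to make understandable to non-experts.

```python
# Toy two-node Bayesian network: Disease -> Test, with exact inference
# by Bayes' rule. All probability values are illustrative assumptions.

P_D = 0.01           # prior: P(disease)
P_T_GIVEN_D = 0.90   # sensitivity: P(positive test | disease)
P_T_GIVEN_ND = 0.05  # false-positive rate: P(positive test | no disease)

def posterior_disease_given_positive() -> float:
    """Return P(disease | positive test) via Bayes' rule."""
    joint_pos_d = P_D * P_T_GIVEN_D            # P(test+, disease)
    joint_pos_nd = (1 - P_D) * P_T_GIVEN_ND    # P(test+, no disease)
    return joint_pos_d / (joint_pos_d + joint_pos_nd)

if __name__ == "__main__":
    print(f"P(disease | positive test) = {posterior_disease_given_positive():.3f}")
```

Even in this minimal example the posterior (about 0.15) is far from the test's 0.90 sensitivity, a gap that is notoriously counter-intuitive; real clinical networks with many interacting variables make such reasoning harder still, which motivates the explanation methods studied in the paper.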

Keywords: Bayesian networks; Explainable AI; User experience.

MeSH terms

  • Bayes Theorem
  • Humans
  • Physicians*