Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice

J Med Philos. 2023 Feb 17;48(1):84-97. doi: 10.1093/jmp/jhac034.

Abstract

In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty that arise for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. In doing so, we examine many of the limitations of current machine learning algorithms (deep learning in particular) and highlight their relevance for medical diagnostics. Among the problems we consider are the theoretical foundations of deep learning (which are not yet adequately understood), the opacity of algorithmic decisions, and the vulnerabilities of machine learning models, as well as concerns regarding the quality of the medical data used to train them. Building on this, we discuss desiderata for an uncertainty amelioration strategy that ensures that the integration of machine learning into clinical settings proves genuinely beneficial to medical practice.

Keywords: black box problem; evidence; machine learning; medical diagnosis; uncertainty.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Humans
  • Machine Learning*
  • Uncertainty