Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning

J Am Med Inform Assoc. 2020 Dec 9;27(12):2024-2027. doi: 10.1093/jamia/ocaa085.

Abstract

Accumulating evidence demonstrates the impact of bias that reflects social inequality on the performance of machine learning (ML) models in health care. Given their intended placement within healthcare decision making more broadly, ML tools require careful attention to adequately quantify the impact of bias and to reduce its potential to exacerbate inequalities. We suggest that taking a patient safety and quality improvement approach to bias can support the quantification of bias-related effects on ML. Drawing from the ethical principles underpinning these approaches, we argue that patient safety and quality improvement lenses support the quantification of relevant performance metrics in order to minimize harm while promoting accountability, justice, and transparency. We identify specific methods for operationalizing these principles with the goal of attending to bias to support better decision making in light of controllable and uncontrollable factors.

Keywords: healthcare delivery; machine learning; patient safety; quality improvement; systematic bias.

MeSH terms

  • Artificial Intelligence / ethics*
  • Data Collection
  • Government Regulation
  • Healthcare Disparities
  • Humans
  • Patient Safety*
  • Prejudice*
  • Quality Improvement*
  • Social Determinants of Health