Intelligent decision support in medical triage: are people robust to biased advice?

J Public Health (Oxf). 2023 Aug 28;45(3):689-696. doi: 10.1093/pubmed/fdad005.

Abstract

Background: Intelligent artificial agents ('agents') have emerged in many domains of human society (healthcare, law, social services). Because the use of intelligent agents can introduce biases, a commonly proposed solution is to keep a human in the loop. Is this enough to ensure unbiased decision making?

Methods: To address this question, an experimental testbed was developed in which a human participant and an agent collaboratively triage patients during a pandemic crisis. The agent supports the human by providing advice and additional information about the patients. In one condition the agent provided sound advice; in the other it gave biased advice. The research question was whether participants would neutralize the bias introduced by the biased agent.
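As an illustration only, and not the authors' actual testbed, the toy Python sketch below simulates the two advice conditions described above: one advisor ranks patients by clinical severity alone, the other discounts severity for older patients, and the simulated "participant" follows the advice with a fixed probability. The patient attributes, the bias rule (an age penalty) and the compliance parameter are all hypothetical assumptions chosen for the sketch.

    # Toy sketch (hypothetical, not the study's testbed): compares decisions
    # made with sound vs. biased triage advice when the human mostly complies.
    import random

    def sound_advice(patient):
        """Unbiased advice: priority reflects clinical severity only."""
        return patient["severity"]

    def biased_advice(patient):
        """Biased advice (assumed rule): severity is discounted over age 65."""
        penalty = 2 if patient["age"] > 65 else 0
        return max(patient["severity"] - penalty, 1)

    def human_decision(patient, advice, compliance=0.8):
        """Participant follows the agent's advice with probability
        `compliance`, otherwise uses their own severity-based judgement."""
        return advice if random.random() < compliance else patient["severity"]

    random.seed(0)
    patients = [{"age": random.randint(20, 90), "severity": random.randint(1, 5)}
                for _ in range(1000)]

    for label, advisor in [("sound", sound_advice), ("biased", biased_advice)]:
        decisions = [human_decision(p, advisor(p)) for p in patients]
        older = [d for p, d in zip(patients, decisions) if p["age"] > 65]
        younger = [d for p, d in zip(patients, decisions) if p["age"] <= 65]
        print(f"{label} condition: mean priority 65+ = {sum(older)/len(older):.2f}, "
              f"<=65 = {sum(younger)/len(younger):.2f}")

In this toy setup, high compliance lets the advisor's age penalty carry through into the final priorities, which is the kind of effect the study's biased condition was designed to probe.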

Results: Although this was an exploratory study, the data suggest that human participants may not be sufficiently in control to correct the agent's bias.

Conclusions: This research shows how important it is to design and test for human control in concrete human-machine collaboration contexts. It suggests that insufficient human control can leave people unable to detect machine biases and thus unable to prevent those biases from affecting decisions.

Keywords: emergency care; ethics; health intelligence.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Decision Support Systems, Clinical*
  • Humans
  • Triage*