Low-rank human-like agents are trusted more and blamed less in human-autonomy teaming

Front Artif Intell. 2024 Apr 29:7:1273350. doi: 10.3389/frai.2024.1273350. eCollection 2024.

Abstract

If humans are to team with artificial teammates, factors that influence trust and shared accountability must be considered when designing agents. This study investigates the influence of anthropomorphism, rank, decision cost, and task difficulty on trust in human-autonomy teams (HATs) and on how blame is apportioned when shared tasks fail. Participants (N = 31) completed repeated trials with an artificial teammate in a low-fidelity variation of an air-traffic control game. Using a within-subject design, we manipulated anthropomorphism (human-like or machine-like agents), the military rank of artificial teammates (three-star superior, two-star peer, or one-star subordinate agents), decision cost via the perceived payload of vehicles (people or supplies onboard), and task difficulty (easy or hard missions). Trust was inferred behaviourally when participants accepted an agent's recommendation, and its absence when recommendations were rejected or ignored; trust data were analysed using binomial logistic regression. After each trial, blame was apportioned using a two-item scale and analysed with a one-way repeated measures ANOVA. A post-experiment questionnaire measured participants' power distance orientation on a seven-item scale, and possible power-related effects on trust and blame apportionment are discussed. Our findings suggest that more human-like and lower-ranked artificial agents increased trust and shared accountability, with human team members accepting more blame for team failures.
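To make the analysis pipeline concrete, the sketch below illustrates the two analyses named in the abstract on simulated data. It is not the authors' code: the full-factorial trial structure, variable names, factor codings, and model specifications are assumptions made for illustration, and the paper's exact models (for example, how participant-level dependence was handled) may differ.

```python
# A minimal, hypothetical sketch of the analyses described in the abstract --
# not the authors' code. Design, names, and models are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_participants = 31  # N reported in the abstract

# Assumed full-factorial within-subject design: 2 x 3 x 2 x 2 = 24 trials.
levels = {
    "anthropomorphism": ["human-like", "machine-like"],
    "rank": ["one-star", "two-star", "three-star"],
    "payload": ["supplies", "people"],  # stands in for decision cost
    "difficulty": ["easy", "hard"],
}
design = pd.MultiIndex.from_product(
    list(levels.values()), names=list(levels.keys())
).to_frame(index=False)
df = pd.concat(
    [design.assign(participant=p) for p in range(n_participants)],
    ignore_index=True,
)

# Simulated outcomes: trust = 1 if the agent's recommendation was accepted,
# 0 if rejected or ignored; blame on an arbitrary 1-7 rating scale.
df["trust"] = rng.integers(0, 2, len(df))
df["blame"] = rng.uniform(1, 7, len(df))

# Binomial logistic regression of trust on the four manipulated factors.
trust_model = smf.logit(
    "trust ~ C(anthropomorphism) + C(rank) + C(payload) + C(difficulty)",
    data=df,
).fit()
print(trust_model.summary())

# One-way repeated measures ANOVA on blame, averaged per participant and
# rank level (treating rank as the factor of interest is an assumption).
blame = df.groupby(["participant", "rank"], as_index=False)["blame"].mean()
print(AnovaRM(blame, depvar="blame",
              subject="participant", within=["rank"]).fit())
```

With real data, the simulated columns would be replaced by one row per trial containing the observed accept/reject outcome and the post-trial blame ratings.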

Keywords: anthropomorphism; blame; human-autonomy teaming; power distance orientation; shared tasks; status; trust.

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The software development of MAHVS was funded by the Defence Science and Technology Group (DSTG), in particular DSTG's Human Autonomy Teaming discipline.