Something's Fishy About It: How Opinion Congeniality and Explainability Affect Motivated Attribution to Artificial Intelligence Versus Human Comment Moderators

Cyberpsychol Behav Soc Netw. 2022 Aug;25(8):496-503. doi: 10.1089/cyber.2021.0347. Epub 2022 Jun 20.

Abstract

An online experiment (N = 384) examined when and how the identity of the comment moderator (artificial intelligence [AI] vs. human) on a news website affects the extent to which individuals (a) suspect political motives for comment removal and (b) believe in the AI heuristic ("AI is objective, neutral, accurate, and fair"). Specifically, we investigated how the provision of an explanation for comment removal (none vs. real vs. placebic) and the opinion congeniality of the remaining comments with the user's own opinion (uncongenial vs. congenial) qualify social responses to AI. Results showed that news users were more suspicious of political motives for an AI (vs. human) moderator's comment removal (a) when the remaining comments were uncongenial and (b) when no explanation was offered for the deleted comments. Providing a real explanation (vs. none) attenuated participants' suspicion of political motives behind comment removal, but only for the AI moderator. When AI moderated the comments section, exposure to congenial (vs. uncongenial) comments led participants to endorse the AI heuristic more strongly, but only in the absence of an explanation for comment removal. By contrast, participants' belief in the AI heuristic was stronger when a human moderator preserved uncongenial (vs. congenial) comments. Apparently, participants regarded AI as a viable alternative to a human moderator whose performance was unsatisfactory.

Keywords: AI; comment moderation; explainability; human-AI communication; machine heuristic.

MeSH terms

  • Affect
  • Artificial Intelligence*
  • Attitude*
  • Heuristics
  • Humans
  • Motivation