Foresight for ethical AI

Front Artif Intell. 2023 Jul 20;6:1143907. doi: 10.3389/frai.2023.1143907. eCollection 2023.

Abstract

There is a growing expectation that artificial intelligence (AI) developers foresee and mitigate harms that might result from their creations; however, this is exceptionally difficult given the prevalence of emergent behaviors that occur when integrating AI into complex sociotechnical systems. We argue that Naturalistic Decision Making (NDM) principles, models, and tools are well-suited to tackling this challenge. Already applied in high-consequence domains, NDM tools such as the premortem have been shown to uncover a reasonable set of risks and the underlying factors that could lead to ethical harms. Such NDM tools have already been used to develop AI that is more trustworthy and resilient, and they can help avoid the unintended consequences of AI built with noble intentions. We present predictive policing algorithms as a use case, highlighting factors that led to ethical harms and showing how NDM tools could help foresee and mitigate such harms.

Keywords: artificial intelligence; ethics; foresight; naturalistic decision making; policy; premortem.

Grants and funding

Funds for open access publication fees were provided by the Social & Behavioral Sciences Department of the MITRE Corporation.