Computational Goals, Values and Decision-Making

Sci Eng Ethics. 2020 Oct;26(5):2487-2495. doi: 10.1007/s11948-020-00244-y.

Abstract

Artificial intelligence is popularly framed as a rational agent that always seeks to maximise its expected utility, where the utility function is referred to as the agent's goal. One feature attributed to such rational agents is that they will never select an action that changes their goal. Therefore, one argument goes, if such an agent is to be friendly towards humanity, we must understand how to specify this friendliness in terms of a utility function. Wolfhart Totschnig (Fully Autonomous AI, Science and Engineering Ethics, 2020) argues, in contrast, that a fully autonomous agent will have the ability to change its utility function and will do so guided by its values. This commentary examines computational accounts of goals, values and decision-making. It rejects the idea that a rational agent will never select an action that changes its goal, but it also argues that an artificial intelligence is unlikely to be purely rational in the sense of always acting to maximise a utility function. It further challenges the idea that an agent which does not change its goal cannot be considered fully autonomous. It agrees, however, that values are an important component of decision-making, and it explores a number of reasons why.
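For readers unfamiliar with the framing, a minimal sketch of the standard decision-theoretic account alluded to above (an illustration, not the paper's own formalisation): a rational agent in state $s$ with utility function $U$ selects the action

$$a^{*} = \arg\max_{a \in A} \sum_{s'} P(s' \mid s, a)\, U(s').$$

On this account the goal-preservation claim follows because an action that would replace $U$ with some $U'$ is itself scored under the current $U$, and so would be selected only if it happened to maximise the agent's present expected utility.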

MeSH terms

  • Artificial Intelligence*
  • Goals*
  • Humans
  • Motivation