The human cost of ethical artificial intelligence

Brain Struct Funct. 2023 Jul;228(6):1365-1369. doi: 10.1007/s00429-023-02662-7. Epub 2023 Jun 23.

Abstract

Foundational models such as ChatGPT depend critically on data at the vast scale only the internet can supply. This implies exposure to material varying widely in logical sense, factual fidelity, moral value, and even legal status. Whereas data scaling is a technical challenge, soluble with greater computational resources, complex semantic filtering cannot be performed reliably without human intervention: the self-supervision that makes foundational models possible presupposes, at least in part, the very abilities they seek to acquire. This unavoidably introduces the need for large-scale human supervision, not just of training input but also of model output, and imbues any model with subjectivity reflecting the beliefs of its creators. The pressure to minimise the cost of the former is in direct conflict with the pressure to maximise the quality of the latter. Moreover, it is unclear how complex semantics, especially in the moral realm, could ever be reduced to an objective function any machine could plausibly maximise. We suggest that the development of foundational models necessitates urgent innovation in quantitative ethics, and we outline possible avenues for its realisation.
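
A concrete illustration of the reduction at issue, offered here as a sketch rather than as the letter's own formalism: in reinforcement learning from human feedback, currently the dominant method for supervising model output, annotators' pairwise judgements over responses are compressed into a scalar reward model $r_\theta$ fitted with the standard Bradley-Terry preference objective

\[
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[\log \sigma\bigl(r_\theta(x, y_w) - r_\theta(x, y_l)\bigr)\right],
\]

where $x$ is a prompt, $y_w$ and $y_l$ are the preferred and rejected responses, and $\sigma$ is the logistic function. Whatever moral complexity the annotators perceive must survive compression into this single ordinal comparison before any machine can maximise it, which is precisely the reduction whose adequacy is questioned above.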

Keywords: Artificial intelligence; Ethical modelling; Philosophy and ethics; Policy.

Publication types

  • Letter

MeSH terms

  • Artificial Intelligence*
  • Humans
  • Logic
  • Morals*
  • Semantics