Scaling up psychology via Scientific Regret Minimization

Proc Natl Acad Sci U S A. 2020 Apr 21;117(16):8825-8835. doi: 10.1073/pnas.1915841117. Epub 2020 Apr 2.

Abstract

Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets. One traditional approach is to analyze the residuals of models (the biggest errors they make in predicting the data) to discover what might be missing from those models. However, once a dataset is sufficiently large, machine learning algorithms approximate the true underlying function better than the data themselves, suggesting, instead, that the predictions of these data-driven models should be used to guide model building. We call this approach "Scientific Regret Minimization" (SRM), as it focuses on minimizing errors for cases that we know should have been predictable. We apply this exploratory method to a subset of the Moral Machine dataset, a public collection of roughly 40 million moral decisions. Using SRM, we find that incorporating a set of deontological principles that capture dimensions along which groups of agents can vary (e.g., sex and age) improves a computational model of human moral judgment. Furthermore, we are able to identify and independently validate three interesting moral phenomena: criminal dehumanization, age of responsibility, and asymmetric notions of responsibility.
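The core idea above, scoring an interpretable model against a flexible data-driven model's predictions rather than against the noisy raw data, can be illustrated with a minimal sketch. This is not the paper's implementation; it uses a hypothetical synthetic dataset, a linear fit as the "interpretable" model, and a simple k-nearest-neighbor smoother as a stand-in for the machine learning model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: nonlinear ground truth plus heavy noise.
x = np.sort(rng.uniform(-3, 3, 500))
y = np.sin(x) + rng.normal(0, 0.5, x.size)

# Simple interpretable model: a linear fit.
slope, intercept = np.polyfit(x, y, 1)
simple_pred = slope * x + intercept

# Flexible data-driven model: k-nearest-neighbor smoother,
# standing in for any well-fit machine learning model.
k = 25
flex_pred = np.array(
    [y[np.argsort(np.abs(x - xi))[:k]].mean() for xi in x]
)

# Classical residual: simple model's error against the noisy data.
residual = (y - simple_pred) ** 2

# "Scientific regret": simple model's error against the flexible
# model's (effectively denoised) predictions. Large-regret cases are
# ones the simple model should have been able to predict.
regret = (flex_pred - simple_pred) ** 2

# Inspect the highest-regret cases to guide revising the simple model.
worst = np.argsort(regret)[-10:]
```

Because regret compares two predictions rather than a prediction against noisy labels, it filters out cases where a large residual is mere noise, focusing model revision on systematic misfit.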

Keywords: decision-making; machine learning; moral psychology; scientific regret.

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Behavioral Sciences / methods*
  • Computer Simulation
  • Datasets as Topic
  • Decision Making*
  • Dehumanization
  • Feasibility Studies
  • Female
  • Humans
  • Judgment*
  • Machine Learning
  • Male
  • Models, Psychological*
  • Morals*