Planning Beyond the Next Trial in Adaptive Experiments: A Dynamic Programming Approach

Cogn Sci. 2017 Nov;41(8):2234-2252. doi: 10.1111/cogs.12467. Epub 2016 Dec 18.

Abstract

Experimentation is at the heart of scientific inquiry. In the behavioral and neural sciences, where often only a limited number of observations can be made, it is desirable to design an experiment that leads to the rapid accumulation of information about the phenomenon under study. Adaptive experimentation has the potential to accelerate scientific progress by maximizing inferential gain in such research settings. To date, most adaptive experiments have relied on myopic, one-step-ahead strategies in which the stimulus on each trial is selected to maximize inference on the next trial only. A lingering question in the field has been how much additional benefit would be gained by optimizing beyond the next trial. A range of technical challenges has prevented this important question from being addressed adequately. This study applies dynamic programming (DP), a technique applicable for such full-horizon, "global" optimization, to model-based perceptual threshold estimation, a domain that has been a major beneficiary of adaptive methods. The results provide insight into the conditions under which optimizing beyond the next trial yields a benefit. Implications for the use of adaptive methods in cognitive science are discussed.
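
To make the contrast concrete, the following is a minimal sketch (not the authors' implementation) of myopic versus multi-step stimulus selection for Bayesian threshold estimation. It assumes a hypothetical logistic psychometric model, a small discrete grid of candidate thresholds and stimuli, and expected posterior entropy as the design criterion; the function and variable names are illustrative only. The horizon-1 case is the one-step-ahead strategy; larger horizons perform backward induction over future trials, the essence of the DP approach.

```python
import math

thetas = [0.2, 0.4, 0.6, 0.8]        # hypothetical candidate threshold values
stimuli = [0.1, 0.3, 0.5, 0.7, 0.9]  # hypothetical candidate stimulus intensities

def p_correct(x, theta, slope=10.0):
    """Assumed logistic psychometric function: P(correct | stimulus x, threshold theta)."""
    return 1.0 / (1.0 + math.exp(-slope * (x - theta)))

def entropy(post):
    """Shannon entropy of a discrete posterior over thresholds."""
    return -sum(p * math.log(p) for p in post if p > 0)

def update(post, x, y):
    """Bayesian posterior update after observing response y (1=correct) at stimulus x."""
    like = [p_correct(x, t) if y else 1.0 - p_correct(x, t) for t in thetas]
    unnorm = [l * p for l, p in zip(like, post)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def expected_entropy(post, x, horizon):
    """Expected posterior entropy after presenting x now and then choosing
    `horizon - 1` further stimuli optimally (backward induction / DP)."""
    total = 0.0
    for y in (0, 1):
        # Predictive probability of response y under the current posterior.
        py = sum((p_correct(x, t) if y else 1.0 - p_correct(x, t)) * p
                 for t, p in zip(thetas, post))
        if py == 0.0:
            continue
        new_post = update(post, x, y)
        if horizon == 1:
            total += py * entropy(new_post)
        else:
            # DP step: assume future stimuli are also chosen optimally.
            total += py * min(expected_entropy(new_post, x2, horizon - 1)
                              for x2 in stimuli)
    return total

def best_stimulus(post, horizon):
    """Stimulus minimizing expected posterior entropy over the given horizon."""
    return min(stimuli, key=lambda x: expected_entropy(post, x, horizon))

prior = [1.0 / len(thetas)] * len(thetas)
myopic_choice = best_stimulus(prior, horizon=1)   # one-step-ahead strategy
lookahead_choice = best_stimulus(prior, horizon=2)  # plans one trial further
```

The exponential branching over responses and stimuli at each step is exactly the computational burden that has historically confined adaptive methods to the myopic, horizon-1 case.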

Keywords: Adaptive experiments; Bayesian inference; Cognitive modeling; Dynamic programming; Perceptual threshold measurement.

MeSH terms

  • Cognition / physiology*
  • Humans
  • Models, Psychological
  • Research Design*