When decision heuristics and science collide

Psychon Bull Rev. 2014 Apr;21(2):268-82. doi: 10.3758/s13423-013-0495-z.

Abstract

The ongoing discussion among scientists about null-hypothesis significance testing and Bayesian data analysis has led to speculation about the practices and consequences of "researcher degrees of freedom." This article advances the debate by asking the broader questions that we, as scientists, should be asking: How do scientists make decisions in the course of doing research, and what is the impact of these decisions on scientific conclusions? We asked practicing scientists to collect data in a simulated research environment, and our findings show that some scientists use data collection heuristics that deviate from prescribed methodology. Monte Carlo simulations show that data collection heuristics based on p values lead to biases in estimated effect sizes and Bayes factors and to increases in both false-positive and false-negative rates, depending on the specific heuristic. We also show that using Bayesian data collection methods does not eliminate these biases. Thus, our study highlights the little-appreciated fact that the process of doing science is a behavioral endeavor that can bias statistical description and inference in a manner that transcends adherence to any particular statistical framework.
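The kind of p-value-based data collection heuristic the abstract describes can be illustrated with a minimal Monte Carlo sketch (this is an assumed illustration, not the authors' actual simulation code): under a true null effect, a researcher who peeks at the p value after every batch of observations and stops as soon as p < .05 produces far more false positives than one who fixes the sample size in advance.

```python
# Hypothetical sketch of "optional stopping": compare the false-positive
# rate of a fixed-N design against a heuristic that tests after every
# batch and stops at the first p < .05. The true effect is zero, so
# every "significant" result here is a false positive.
import math
import random
import statistics

def one_sample_p(xs):
    """Two-sided one-sample test of mean = 0, using a normal
    approximation to the t distribution (adequate for this sketch)."""
    n = len(xs)
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def fixed_n_fp(n=100, reps=2000, seed=1):
    """False-positive rate when N is fixed in advance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        hits += one_sample_p(xs) < 0.05
    return hits / reps

def peeking_fp(batch=10, max_n=100, reps=2000, seed=1):
    """False-positive rate when testing after each batch of data and
    stopping at the first significant result."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = []
        while len(xs) < max_n:
            xs.extend(rng.gauss(0, 1) for _ in range(batch))
            if one_sample_p(xs) < 0.05:  # peek at the p value
                hits += 1
                break
    return hits / reps

fixed = fixed_n_fp()
peeking = peeking_fp()
print(f"fixed-N false-positive rate: {fixed:.3f}")
print(f"peeking false-positive rate: {peeking:.3f}")
```

The fixed-N rate stays near the nominal .05, while repeated peeking multiplies the opportunities to cross the threshold by chance, inflating the false-positive rate well above it, which is one of the biases the article's simulations quantify.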

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Bayes Theorem*
  • Data Collection / standards*
  • Decision Making*
  • Humans
  • Research Design / standards*
  • Science / methods
  • Science / standards*
  • Statistics as Topic / standards*