Risk factors, confounding, and the illusion of statistical control

Psychosom Med. 2004 Nov-Dec;66(6):868-75. doi: 10.1097/01.psy.0000140008.70959.41.

Abstract

When experimental designs are premature, impractical, or impossible, researchers must rely on statistical methods to adjust for potentially confounding effects. Such procedures, however, are quite fallible. We examine several errors that often follow the use of statistical adjustment. The first is inferring that a factor is causal because it predicts an outcome even after "statistical control" for other factors. This inference is fallacious when (as is usual) such control consists of removing the linear contributions of imperfectly measured variables, or when some confounders remain unmeasured. The converse fallacy is inferring that a factor is not causally important because its association with the outcome is attenuated or eliminated once covariates are included in the adjustment process. This attenuation may reflect only that the covariates treated as confounders are actually mediators (intermediates), critical links in the causal chain from the study factor to the study outcome. Other problems arise from mismeasurement of the study factor or outcome, or because these study variables are only proxies for underlying constructs. Statistical adjustment serves a useful function, but it cannot transform observational studies into natural experiments, and it involves far more subjective judgment than many users realize.
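Both fallacies can be illustrated with a small simulation; the sketch below is hypothetical (not from the paper) and uses simple linear data so the algebra is transparent. It shows that linearly adjusting for a noisily measured confounder leaves a spurious association, while adjusting for a mediator erases a genuine causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def ols_coefs(y, *cols):
    """Ordinary least-squares coefficients: intercept first, then one per column."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Fallacy 1: residual confounding from an imperfectly measured confounder.
# The confounder c drives both exposure x and outcome y; x has NO causal
# effect on y, but the analyst observes c only with measurement error.
c = rng.normal(size=n)
x = c + rng.normal(size=n)
y = c + rng.normal(size=n)
c_obs = c + rng.normal(size=n)  # noisy proxy for c

coef_true = ols_coefs(y, x, c)[1]       # adjust for the true confounder
coef_noisy = ols_coefs(y, x, c_obs)[1]  # adjust for the noisy proxy
print(f"x coefficient, true adjustment:  {coef_true:.3f}")   # ~0.00
print(f"x coefficient, noisy adjustment: {coef_noisy:.3f}")  # ~0.33, spurious

# Fallacy 2: "adjusting away" a mediator. Here x DOES cause y, but only
# through the mediator m; controlling for m makes x look unimportant.
x2 = rng.normal(size=n)
m = x2 + rng.normal(size=n)
y2 = m + rng.normal(size=n)

coef_total = ols_coefs(y2, x2)[1]        # unadjusted: the real total effect
coef_adjusted = ols_coefs(y2, x2, m)[1]  # adjusted for the mediator
print(f"x coefficient, unadjusted:        {coef_total:.3f}")    # ~1.00
print(f"x coefficient, mediator-adjusted: {coef_adjusted:.3f}") # ~0.00
```

In the first scenario the noisy proxy removes only part of the confounder's linear contribution, so the exposure retains a nonzero coefficient despite having no causal effect; in the second, the mediator carries the entire effect of the exposure, so conditioning on it makes a genuinely causal factor appear inert.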

Publication types

  • Review

MeSH terms

  • Bias
  • Confounding Factors, Epidemiologic*
  • Data Interpretation, Statistical*
  • False Positive Reactions
  • Health Behavior
  • Humans
  • Observation / methods
  • Outcome Assessment, Health Care
  • Randomized Controlled Trials as Topic / statistics & numerical data
  • Reproducibility of Results
  • Research Design / standards*
  • Risk Factors