Estimation of an overall standardized mean difference in random-effects meta-analysis if the distribution of random effects departs from normal

Res Synth Methods. 2018 Sep;9(3):489-503. doi: 10.1002/jrsm.1312. Epub 2018 Jul 30.

Abstract

The random-effects model, now applied in most meta-analyses, typically assumes that the effect parameters are normally distributed. The purpose of this study was to examine the performance of various random-effects methods (the standard method, Hartung's method, the profile likelihood method, and bootstrapping) for computing an average effect size estimate and a confidence interval (CI) around it when the normality assumption is not met. For comparison purposes, we also included the fixed-effect model. Using Monte Carlo simulation, we manipulated a wide range of conditions, including conditions with some degree of departure from the normality assumption. To simulate realistic scenarios, we chose the manipulated conditions from a systematic review of meta-analyses on the effectiveness of psychological treatments. We compared the performance of the different methods in terms of bias and mean squared error of the average effect estimators, empirical coverage probability and width of the CIs, and variability of the standard errors. Our results suggest that the random-effects methods are largely robust to departures from normality, with Hartung's method and the profile likelihood method yielding the best performance under suboptimal conditions.
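
The abstract names the competing estimators without formulas. As a minimal sketch only (not the authors' implementation, and assuming the standard method pairs a DerSimonian-Laird between-study variance with a normal-quantile Wald CI, while Hartung's method uses a weighted residual variance and a t quantile with k-1 degrees of freedom), the following Python code contrasts the two CIs on made-up standardized mean differences:

import numpy as np
from scipy import stats

def dersimonian_laird_tau2(y, v):
    # DerSimonian-Laird estimate of the between-study variance tau^2
    w = 1.0 / v
    k = len(y)
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect weighted mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)           # truncate at zero

def random_effects_ci(y, v, alpha=0.05, hartung=False):
    # Average effect and CI under the random-effects model.
    # hartung=False: standard Wald-type CI with a normal quantile.
    # hartung=True : Hartung's adjustment (weighted residual variance,
    #                t quantile with k-1 degrees of freedom).
    tau2 = dersimonian_laird_tau2(y, v)
    w = 1.0 / (v + tau2)                         # random-effects weights
    k = len(y)
    mu = np.sum(w * y) / np.sum(w)               # weighted average effect
    if hartung:
        var_mu = np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w))
        crit = stats.t.ppf(1 - alpha / 2, df=k - 1)
    else:
        var_mu = 1.0 / np.sum(w)
        crit = stats.norm.ppf(1 - alpha / 2)
    se = np.sqrt(var_mu)
    return mu, (mu - crit * se, mu + crit * se)

# Hypothetical standardized mean differences and their sampling variances
d = np.array([0.35, 0.62, 0.10, 0.48, 0.27])
var_d = np.array([0.04, 0.09, 0.05, 0.06, 0.08])
print(random_effects_ci(d, var_d))                # standard method
print(random_effects_ci(d, var_d, hartung=True))  # Hartung's method

The profile likelihood and bootstrap CIs studied in the article, as well as the simulation of non-normal random-effects distributions, are not reproduced here; the sketch is only meant to make the contrast between a normal-quantile CI and Hartung's t-based CI concrete.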

Keywords: confidence interval; meta-analysis; overall effect size; random-effects model.

MeSH terms

  • Algorithms
  • Bias
  • Computer Simulation
  • Data Interpretation, Statistical
  • Humans
  • Likelihood Functions
  • Mental Disorders / therapy*
  • Meta-Analysis as Topic*
  • Models, Statistical*
  • Monte Carlo Method*
  • Probability
  • Psychology / methods*
  • Reproducibility of Results