The Logic of Generalization From Systematic Reviews and Meta-Analyses of Impact Evaluations

Eval Rev. 2024 Jun;48(3):427-460. doi: 10.1177/0193841X241227481. Epub 2024 Jan 23.

Abstract

Systematic reviews and meta-analyses are viewed as potent tools for generalized causal inference. These reviews are routinely used to inform decision makers about the expected effects of interventions. However, the logic of generalization from research reviews to diverse policy and practice contexts is not well developed. Building on sampling theory, concerns about epistemic uncertainty, and principles of generalized causal inference, this article presents a pragmatic approach to generalizability assessment for use with systematic reviews and meta-analyses. This approach is applied to two systematic reviews and meta-analyses of the effects of "evidence-based" psychosocial interventions for youth and families. Evaluations included in systematic reviews are not necessarily representative of the populations and treatments of interest. Generalizability of results is limited by high risks of bias, uncertain estimates, and insufficient descriptive data from impact evaluations. Systematic reviews and meta-analyses can be used to test generalizability claims, explore heterogeneity, and identify potential moderators of effects. These reviews can also produce pooled estimates that are not representative of any larger sets of studies, programs, or people. Further work is needed to improve the conduct and reporting of impact evaluations and systematic reviews, to develop practical approaches to generalizability assessment, and to guide applications of interventions in diverse policy and practice contexts.
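The abstract refers to pooled estimates, heterogeneity, and potential moderators of effects. As a point of reference only, and not drawn from the article itself, the sketch below shows a minimal random-effects meta-analysis using the DerSimonian-Laird estimator, producing a pooled estimate alongside the heterogeneity statistics (Q, tau-squared, I-squared) that a reviewer might examine before making any generalizability claim. The effect sizes, variances, and the function name `random_effects_pool` are hypothetical and chosen purely for illustration.

```python
"""Illustrative sketch (not from the article): random-effects pooling with
the DerSimonian-Laird estimator. All inputs are hypothetical."""
import numpy as np


def random_effects_pool(effects, variances):
    """Pool study-level effect estimates under a random-effects model."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    k = y.size

    # Fixed-effect weights and pooled estimate (needed to compute Q)
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)

    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    q = np.sum(w * (y - y_fe) ** 2)
    df = k - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # I^2: share of total variability attributable to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Random-effects weights incorporate tau^2; pool and compute a 95% CI
    w_re = 1.0 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = (y_re - 1.96 * se_re, y_re + 1.96 * se_re)

    return {"pooled": y_re, "se": se_re, "ci95": ci, "Q": q, "tau2": tau2, "I2": i2}


if __name__ == "__main__":
    # Hypothetical standardized mean differences and their sampling variances
    effects = [0.30, 0.12, 0.55, -0.05, 0.41]
    variances = [0.02, 0.05, 0.03, 0.04, 0.06]
    for key, value in random_effects_pool(effects, variances).items():
        print(f"{key}: {value}")
```

In this kind of summary, a large I-squared or tau-squared flags heterogeneity that might then be explored with moderator analyses, which is consistent with the abstract's caution that a pooled estimate may not represent any larger set of studies, programs, or people.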

Keywords: external validity; generalizability; meta-analysis; proximal similarity; systematic review.

MeSH terms

  • Adolescent
  • Generalization, Psychological*
  • Humans
  • Logic*
  • Systematic Reviews as Topic