Model Selection and Model Averaging for Mixed-Effects Models with Crossed Random Effects for Subjects and Items

Multivariate Behav Res. 2022 Jul-Aug;57(4):603-619. doi: 10.1080/00273171.2021.1889946. Epub 2021 Feb 26.

Abstract

A good deal of experimental research is characterized by the presence of random effects for both subjects and items. A standard modeling approach that accommodates these sources of variability is the mixed-effects model (MEM) with crossed random effects. However, under-parameterizing or over-parameterizing the random structure of an MEM biases the estimated standard errors (SEs) of the fixed effects. In this simulation study, we examined two different but complementary perspectives: model selection with likelihood-ratio tests, AIC, and BIC; and model averaging with Akaike weights. Results showed that the rate of true model selection was constant across the strategies examined (including ML and REML estimators). However, sample size and the variance of the random slopes explained both true model selection and the SE bias of the fixed effects. No relevant differences in SE bias were found between model selection and model averaging. Sample size and the variance of the random slopes interacted with the estimator to explain SE bias. Only the within-subjects effect showed significant underestimation of SEs with a smaller number of items and larger item random slopes. SE bias was higher for ML than for REML, but the variability of SE bias showed the opposite pattern. Such variability can translate into high rates of unacceptable bias across many replications.
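The model-averaging perspective mentioned above weights each candidate model by its Akaike weight. A minimal sketch of that computation is below; the function names and the AIC values are illustrative, not taken from the study:

```python
import math

def akaike_weights(aics):
    """Compute Akaike weights from a sequence of AIC values.

    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), where
    delta_i = AIC_i - min(AIC). The weights sum to 1 and can be
    read as the relative support for each candidate model.
    """
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def model_average(estimates, weights):
    """Model-averaged fixed-effect estimate: a weight-weighted sum."""
    return sum(w * b for w, b in zip(weights, estimates))

# Hypothetical AICs for three candidate random-effects structures
aics = [1002.3, 1000.0, 1005.7]
weights = akaike_weights(aics)  # the model with the lowest AIC gets the largest weight

# Hypothetical fixed-effect estimates from the same three models
avg_estimate = model_average([0.48, 0.52, 0.45], weights)
```

Averaging over candidate random-effects structures in this way avoids committing to a single (possibly under- or over-parameterized) model; the abstract reports that it yielded SE bias comparable to selecting one model outright.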

Keywords: ML; Mixed-effects models; REML; crossed random effects; model averaging; model selection; random slopes.

MeSH terms

  • Bias
  • Computer Simulation
  • Humans
  • Likelihood Functions*
  • Sample Size