Bayesian regularization in multiple-indicators multiple-causes models

Psychol Methods. 2023 Jul 27. doi: 10.1037/met0000594. Online ahead of print.

Abstract

Integrating regularization methods into structural equation modeling is gaining increasing popularity. The purpose of regularization is to improve variable selection, model estimation, and prediction accuracy. In this study, we aim to: (a) compare Bayesian regularization methods for exploring covariate effects in multiple-indicators multiple-causes models, (b) examine the sensitivity of results to hyperparameter settings of penalty priors, and (c) investigate prediction accuracy through cross-validation. The Bayesian regularization methods examined included ridge, lasso, adaptive lasso, the spike-and-slab prior (SSP) and its variants, and the horseshoe and its variants. Sparse solutions were developed for the structural coefficient matrix, in which only a small proportion of path coefficients were nonzero, characterizing the effects of selected covariates on the latent variable. Results from the simulation study showed that, compared with diffuse priors, penalty priors were advantageous in handling small sample sizes and collinearity among covariates. Priors with only a global penalty (ridge and lasso) yielded higher model convergence rates and power, whereas priors with both global and local penalties (horseshoe and SSP) provided more accurate parameter estimates for medium and large covariate effects. The horseshoe and SSP also improved accuracy in predicting factor scores while yielding more parsimonious models. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
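
To make the global-versus-local penalty distinction concrete, the sketch below shows standard textbook forms of these priors on a structural coefficient \beta_j (the path from covariate j to the latent variable). The notation and hyperparameter choices are illustrative assumptions and need not match the exact parameterizations or settings used in the study.

\begin{align*}
\text{Ridge (global penalty):} \quad
  & \beta_j \mid \tau \sim \mathcal{N}(0, \tau^2) \\
\text{Lasso (global penalty):} \quad
  & \beta_j \mid \tau \sim \mathrm{Laplace}(0, \tau) \\
\text{Horseshoe (global and local penalties):} \quad
  & \beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2),
    \quad \lambda_j \sim \mathrm{C}^{+}(0, 1),
    \quad \tau \sim \mathrm{C}^{+}(0, 1) \\
\text{Spike-and-slab (SSP):} \quad
  & \beta_j \mid \gamma_j \sim (1 - \gamma_j)\,\mathcal{N}(0, c_0^2) + \gamma_j\,\mathcal{N}(0, c_1^2),
    \quad \gamma_j \sim \mathrm{Bernoulli}(\pi), \quad c_0 \ll c_1
\end{align*}

Here \mathrm{C}^{+}(0, 1) denotes the standard half-Cauchy distribution. The global scale \tau shrinks all coefficients toward zero, while the local scales \lambda_j (or the inclusion indicators \gamma_j) allow individual coefficients to escape that shrinkage, which is consistent with the finding that global-local priors estimate medium and large covariate effects more accurately while still yielding sparse, parsimonious solutions.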