Detecting Conditional Dependence Using Flexible Bayesian Latent Class Analysis

Front Psychol. 2020 Aug 6;11:1987. doi: 10.3389/fpsyg.2020.01987. eCollection 2020.

Abstract

A fundamental assumption underlying latent class analysis (LCA) is that the class indicators are conditionally independent of one another, given latent class membership. Bayesian LCA enables researchers to detect and accommodate violations of this assumption by estimating any number of correlations among indicators with proper prior distributions. However, little is known about how the choice of prior may affect the performance of Bayesian LCA. This article presents a Monte Carlo simulation study that investigates (1) the utility of priors across a range of prior variances (i.e., from strongly non-informative to strongly informative priors) in terms of Type I error and power for detecting conditional dependence and (2) the influence of imposing approximate independence on the model fit of Bayesian LCA. The simulation results favored the use of a weakly informative prior with a large variance: model fit, as assessed by the posterior predictive p-value, was consistently satisfactory whether the class indicators were conditionally independent or dependent. Based on the current findings and the existing literature, this article offers methodological guidelines and suggestions for applied researchers.

Keywords: Bayesian latent class analysis; approximate independence; conditional dependence; model fit; prior variance.
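To make the conditional independence assumption concrete, the following minimal sketch (not taken from the article; all parameter values are illustrative assumptions) simulates binary indicators under a two-class model, once with conditional independence and once with a residual dependence between two indicators, and then inspects the within-class correlation that Bayesian LCA would need to detect.

```python
# Illustrative simulation of conditional (in)dependence under a two-class LCA model.
# Class proportions, response probabilities, and the dependence mechanism are assumed
# for demonstration only.
import numpy as np

rng = np.random.default_rng(2020)
n = 50_000

# Latent class membership (two classes, assumed 60/40 split)
cls = rng.binomial(1, 0.4, size=n)

# Class-specific response probabilities for three binary indicators
p = np.where(cls[:, None] == 1, [0.8, 0.8, 0.8], [0.2, 0.2, 0.2])

# (a) Conditionally independent indicators
y_indep = (rng.random((n, 3)) < p).astype(int)

# (b) Conditional dependence: with probability 0.5, indicator 2 copies indicator 1,
#     which induces a within-class correlation while leaving the marginals unchanged
y_dep = (rng.random((n, 3)) < p).astype(int)
share = rng.random(n) < 0.5
y_dep[share, 1] = y_dep[share, 0]

def within_class_corr(y, cls, k):
    """Correlation of indicators 1 and 2 among members of class k."""
    sub = y[cls == k]
    return np.corrcoef(sub[:, 0], sub[:, 1])[0, 1]

print("independent data, class 0:", round(within_class_corr(y_indep, cls, 0), 3))
print("dependent data,   class 0:", round(within_class_corr(y_dep, cls, 0), 3))
```

In the independent condition the within-class correlation is near zero, whereas in the dependent condition it is clearly positive; in the Bayesian LCA framework described in the abstract, such residual correlations are estimated directly and their priors (with smaller or larger variances) govern how readily the dependence is detected.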