Posterior Averaging Information Criterion

Entropy (Basel). 2023 Mar 7;25(3):468. doi: 10.3390/e25030468.

Abstract

We propose a new model selection method, named the posterior averaging information criterion, for Bayesian model assessment to minimize the risk of predicting independent future observations. The theoretical foundation is built on the Kullback-Leibler divergence, which quantifies the similarity between a candidate model and the underlying true model. From a Bayesian perspective, our method evaluates the candidate models over the entire posterior distribution in terms of predicting a future independent observation. Without assuming that the true distribution is contained among the candidate models, the new criterion is developed by correcting the asymptotic bias of the posterior mean of the in-sample log-likelihood relative to the out-of-sample log-likelihood, and it can be applied generally, even to Bayesian models with degenerate non-informative priors. Simulations in both normal and binomial settings demonstrate superior small-sample performance.
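The quantity being bias-corrected can be illustrated with a minimal sketch. The snippet below is not the paper's criterion; it only demonstrates, for a hypothetical conjugate normal-mean model with known variance, the two ingredients the abstract names: the posterior mean of the in-sample log-likelihood (approximated by Monte Carlo over posterior draws) and the corresponding posterior-averaged log-likelihood of independent out-of-sample data. The in-sample version is optimistic on average, and an information criterion of this kind subtracts an estimate of that bias.

```python
import math
import random

random.seed(0)

def loglik(data, mu, sigma=1.0):
    """Gaussian log-likelihood of data under mean mu, known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

# In-sample data and an independent out-of-sample set from the
# same (hypothetical) true model N(0.5, 1).
n = 50
train = [random.gauss(0.5, 1.0) for _ in range(n)]
test = [random.gauss(0.5, 1.0) for _ in range(n)]

# Conjugate posterior for mu under a diffuse prior N(0, tau^2),
# with known sigma = 1: posterior is N(post_mean, post_var).
tau2 = 100.0
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * sum(train)

# Posterior mean of the log-likelihood, approximated by averaging
# over S Monte Carlo draws from the posterior of mu.
S = 2000
draws = [random.gauss(post_mean, math.sqrt(post_var)) for _ in range(S)]
in_sample = sum(loglik(train, mu) for mu in draws) / S
out_sample = sum(loglik(test, mu) for mu in draws) / S

# in_sample systematically exceeds out_sample in expectation;
# correcting that asymptotic bias is the role of the criterion.
print(in_sample, out_sample)
```

All model choices here (the normal mean model, the prior, the sample sizes) are illustrative assumptions; the paper develops the bias correction in generality, without assuming the true distribution lies in the candidate family.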

Keywords: Bayesian modeling; Kullback–Leibler divergence; expected out-of-sample likelihood; misspecified model; predictive model selection.

Grants and funding

This research was funded in part by the Columbia University GSAS Faculty Fellowship and NIH/NCI Grant CA100632.