An efficient sampling algorithm with adaptations for Bayesian variable selection

Neural Netw. 2015 Jan;61:22-31. doi: 10.1016/j.neunet.2014.09.010. Epub 2014 Oct 7.

Abstract

In Bayesian variable selection, indicator model selection (IMS) is a well-known class of sampling algorithms that has been applied to a variety of models. IMS methods use pseudo-priors and include, as special cases, Gibbs variable selection (GVS) and Kuo and Mallick's (KM) method. However, the efficiency of the IMS depends strongly on the parameters of the proposal distribution and the pseudo-priors. Specifically, the GVS determines these parameters from a pilot run of the full model, while the KM method sets them equal to the prior parameters, which often leads to slow mixing. In this paper, we propose an algorithm that adapts the parameters of the IMS during the run. The parameters obtained on the fly yield an appropriate proposal distribution and pseudo-priors, which improve the mixing of the algorithm. We also prove a convergence theorem for the proposed algorithm, and experiments on Bayesian variable selection confirm that it is more efficient than the conventional algorithms.
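As context for the abstract, the following is a minimal sketch of an indicator-based Gibbs sampler in the spirit of the KM method, for an assumed linear model y = X(γ∘β) + ε with known noise variance. The simulated data, hyperparameter values, and all variable names are illustrative assumptions, not the paper's actual algorithm. The characteristic KM choice appears in the coefficient update: when an indicator is zero, the coefficient is drawn from its prior, which plays the role of the pseudo-prior; a diffuse prior here is exactly what causes the slow mixing the paper aims to fix by adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: 5 predictors, only the first 2 truly active
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

sigma2 = 0.25   # noise variance (assumed known for simplicity)
tau2 = 10.0     # prior variance of each beta_j (also the KM pseudo-prior)
w = 0.5         # prior inclusion probability of each gamma_j

iters, burn = 2000, 500
gamma = np.ones(p, dtype=int)
beta = np.zeros(p)
incl = np.zeros(p)  # accumulates post-burn-in indicator draws

for t in range(iters):
    # Update each beta_j: conditional posterior draw if included,
    # prior (= pseudo-prior, the KM choice) draw if excluded.
    for j in range(p):
        if gamma[j] == 1:
            # residual with variable j's contribution removed
            r = y - X @ (gamma * beta) + X[:, j] * beta[j]
            v = 1.0 / (X[:, j] @ X[:, j] / sigma2 + 1.0 / tau2)
            m = v * (X[:, j] @ r) / sigma2
            beta[j] = m + np.sqrt(v) * rng.standard_normal()
        else:
            beta[j] = np.sqrt(tau2) * rng.standard_normal()
    # Update each gamma_j from its full conditional given beta.
    for j in range(p):
        resid = y - X @ (gamma * beta) + X[:, j] * gamma[j] * beta[j]
        ll1 = -0.5 * np.sum((resid - X[:, j] * beta[j]) ** 2) / sigma2
        ll0 = -0.5 * np.sum(resid ** 2) / sigma2
        c = max(ll1, ll0)  # stabilize the exponentials
        p1 = w * np.exp(ll1 - c)
        p0 = (1 - w) * np.exp(ll0 - c)
        gamma[j] = rng.random() < p1 / (p1 + p0)
    if t >= burn:
        incl += gamma

print(np.round(incl / (iters - burn), 2))  # posterior inclusion probabilities
```

In this toy run the two active predictors should receive inclusion probabilities near 1 and the null predictors near 0. Note that an excluded variable can only re-enter when its random prior draw happens to fit the data, which is why the KM pseudo-prior choice mixes slowly and why GVS-style or adaptive pseudo-priors help.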

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Bayes Theorem
  • Models, Theoretical*