Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units

Neural Netw. 2016 Jul;79:78-87. doi: 10.1016/j.neunet.2016.03.013. Epub 2016 Apr 12.

Abstract

The restricted Boltzmann machine (RBM) is an essential building block of deep learning, but it is difficult to train with maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence between the data distribution and the model distribution. Instead, contrastive divergence (CD) learning was developed as an approximation of ML learning and is widely used in practice. To clarify the behavior of CD learning, in this paper we analytically derive the fixed points to which the ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units, and the other with Gaussian visible and Bernoulli hidden units. We also analyze the stability of these fixed points. We find that in the Gaussian-Gaussian RBM, the stable fixed points of the CDn learning rule coincide with those of the ML learning rule, and that the leading principal components of the input data are extracted at these stable points. Moreover, in the Gaussian-Bernoulli RBM, both ML and CDn learning can extract independent components at one of the stable fixed points. Our analysis thus demonstrates that CD1 learning extracts the same feature components as those extracted by ML learning. Extending this study should elucidate the specific solutions obtained by CD learning in other types of RBMs and in deep networks.
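For concreteness, the following is a minimal NumPy sketch of the CD1 update for a Gaussian-Bernoulli RBM with unit visible variance; it is an illustration of the learning rule analyzed in the paper, not code from the paper itself. The variable names (W, b, c, lr), the unit-variance assumption, and the toy data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=1e-3, sample_v=True):
    """One CD1 update for a Gaussian-Bernoulli RBM (unit visible variance assumed).

    v0: (N, D) data batch; W: (D, M) weights; b: (D,) visible bias; c: (M,) hidden bias.
    """
    # Positive phase: hidden activation probabilities given the data.
    h0_prob = sigmoid(c + v0 @ W)                                  # (N, M)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)       # Bernoulli sample
    # Negative phase: one Gibbs step back to the visible layer.
    v1_mean = b + h0 @ W.T                                         # (N, D)
    v1 = v1_mean + rng.standard_normal(v1_mean.shape) if sample_v else v1_mean
    h1_prob = sigmoid(c + v1 @ W)
    # CD1 gradient estimate: data statistics minus one-step reconstruction statistics.
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1.T @ h1_prob) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b, c

# Toy usage: after many CD1 steps the columns of W converge toward feature
# components of the data (cf. the paper's fixed-point and stability analysis).
D, M, N = 10, 4, 500
X = rng.standard_normal((N, D)) @ rng.standard_normal((D, D)) * 0.5
W = 0.01 * rng.standard_normal((D, M))
b, c = np.zeros(D), np.zeros(M)
for _ in range(2000):
    W, b, c = cd1_step(X, W, b, c)
```

CDn would simply repeat the hidden-visible-hidden Gibbs sweep n times before computing the negative-phase statistics; the paper's result is that, at the stable fixed points, this one-step (CD1) estimate already yields the same extracted components as the exact ML gradient.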

Keywords: Component analysis; Contrastive divergence; Deep learning; Restricted Boltzmann machine; Stability of learning algorithms.

MeSH terms

  • Algorithms
  • Learning
  • Machine Learning*
  • Neural Networks, Computer*
  • Normal Distribution*
  • Probability Learning