Wang-Landau algorithm as stochastic optimization and its acceleration

Phys Rev E. 2020 Mar;101(3-1):033301. doi: 10.1103/PhysRevE.101.033301.

Abstract

We show that the Wang-Landau algorithm can be formulated as a stochastic gradient descent algorithm minimizing a smooth and convex objective function, whose gradient is estimated using Markov chain Monte Carlo iterations. The optimization formulation provides another way to establish the convergence rate of the Wang-Landau algorithm, by exploiting the fact that, almost surely, the density estimates (on the logarithmic scale) remain in a compact set, on which the objective function is strongly convex. The optimization viewpoint motivates us to improve the efficiency of the Wang-Landau algorithm using popular tools, including the momentum method and the adaptive learning rate method. We demonstrate the accelerated Wang-Landau algorithm on a two-dimensional Ising model and a two-dimensional ten-state Potts model.
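To make the abstract's description concrete, the following is a minimal sketch (in Python, assuming NumPy) of Wang-Landau sampling on a small 2D Ising lattice, with the log-density-of-states update written as an SGD-like step. The heavy-ball momentum term, lattice size, step size, momentum coefficient, and flatness threshold are illustrative assumptions for a quick demonstration, not the paper's exact scheme or settings.

```
# A minimal sketch, assuming NumPy: Wang-Landau sampling of the density of
# states of a small 2D Ising model, with the log-density update written as an
# SGD-like step plus an illustrative heavy-ball momentum term.
import numpy as np

rng = np.random.default_rng(0)

L = 4                        # linear lattice size (illustrative)
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    """Nearest-neighbour Ising energy with periodic boundaries."""
    return int(-np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

# Energies lie in {-2N, ..., 2N}; bin index = E + 2N (unreachable bins stay empty).
n_bins = 4 * N + 1
log_g = np.zeros(n_bins)      # estimate of log density of states
hist = np.zeros(n_bins)       # visit histogram for the flatness check
velocity = np.zeros(n_bins)   # momentum buffer (assumed accelerated variant)

f = 1.0                       # log modification factor, i.e. the learning rate
momentum = 0.5                # heavy-ball coefficient (illustrative)
E = total_energy(spins)

for sweep in range(5000):
    for _ in range(N):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb              # energy change of flipping (i, j)
        old_b, new_b = E + 2 * N, E + dE + 2 * N
        # Wang-Landau acceptance: min(1, g(E_old) / g(E_new)) on the log scale
        if rng.random() < np.exp(min(0.0, log_g[old_b] - log_g[new_b])):
            spins[i, j] *= -1
            E += dE
        cur = E + 2 * N
        # SGD-like update: the visit indicator acts as the stochastic gradient,
        # f as the learning rate; the momentum buffer accumulates past gradients.
        velocity *= momentum
        velocity[cur] += 1.0
        log_g += f * velocity
        hist[cur] += 1
    # When the visit histogram is roughly flat, halve the step size (standard WL rule)
    visited = hist[hist > 0]
    if visited.size and visited.min() > 0.8 * visited.mean():
        f *= 0.5
        hist[:] = 0
        if f < 1e-4:
            break

print("final step size f =", f)
print("visited energy levels:", np.flatnonzero(log_g > 0) - 2 * N)
```

In this reading, the modification factor f plays the role of the learning rate and the visit indicator plays the role of the stochastic gradient produced by the Markov chain; the accelerated variants described in the abstract apply momentum and adaptive step sizes to this same update.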