Knot selection in sparse Gaussian processes with a variational objective function

Stat Anal Data Min. 2020 Aug;13(4):324-336. doi: 10.1002/sam.11459. Epub 2020 Apr 20.

Abstract

Sparse, knot-based Gaussian processes have enjoyed considerable success as scalable approximations of full Gaussian processes. Certain sparse models can be derived through specific variational approximations to the true posterior, and knots can be selected to minimize the Kullback-Leibler divergence between the approximate and true posterior. While this approach has been successful, simultaneously optimizing all knots can be slow because of the number of parameters involved. Furthermore, few methods have been proposed for selecting the number of knots, and no experimental results exist in the literature. We propose a one-at-a-time knot selection algorithm, based on Bayesian optimization, that selects both the number and the locations of the knots. On three benchmark datasets, this method performs competitively with simultaneous optimization of the knots at a fraction of the computational cost.

Keywords: knot selection; machine learning; nonparametric regression; sparse Gaussian processes; variational inference.
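To make the one-at-a-time idea concrete, the following is a minimal sketch, not the paper's implementation. It uses the standard collapsed variational lower bound of Titsias (2009) for sparse GP regression as the objective, a fixed RBF kernel, and a simple random candidate search in place of the Bayesian optimization inner loop the abstract describes; all function names, hyperparameters (`noise`, `tol`, `n_cand`), and the stopping rule are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between row-stacked inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def elbo(X, y, Z, noise=0.1):
    """Collapsed variational lower bound for sparse GP regression
    (Titsias 2009) with knots/inducing points Z."""
    n, m = len(X), len(Z)
    Kmm = rbf(Z, Z) + 1e-6 * np.eye(m)          # jitter for stability
    Kmn = rbf(Z, X)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn) / noise
    B = A @ A.T + np.eye(m)
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / noise
    return (-0.5 * n * np.log(2 * np.pi)
            - np.sum(np.log(np.diag(LB)))
            - n * np.log(noise)
            - 0.5 * (y @ y) / noise**2
            + 0.5 * (c @ c)
            - 0.5 * n / noise**2                 # trace(Knn)/noise^2, RBF diag = 1
            + 0.5 * np.trace(A @ A.T))           # trace(Qnn)/noise^2

def greedy_knots(X, y, max_knots=10, n_cand=50, tol=1e-3, seed=None):
    """Add knots one at a time, keeping a candidate only if it improves the
    bound by more than tol; this also selects the number of knots."""
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), 1)]                 # start from one random knot
    best = elbo(X, y, Z)
    while len(Z) < max_knots:
        cand = X[rng.choice(len(X), n_cand)]     # paper uses Bayesian optimization here
        scores = [elbo(X, y, np.vstack([Z, z])) for z in cand]
        i = int(np.argmax(scores))
        if scores[i] - best < tol:               # negligible gain: stop adding knots
            break
        Z = np.vstack([Z, cand[i]])
        best = scores[i]
    return Z, best
```

Because each step optimizes a single knot location, each inner evaluation involves far fewer free parameters than optimizing all knot coordinates jointly, which is the source of the computational savings the abstract refers to.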