Optimal criteria and their asymptotic form for data selection in data-driven reduced-order modelling with Gaussian process regression

Philos Trans A Math Phys Eng Sci. 2022 Aug 8;380(2229):20210197. doi: 10.1098/rsta.2021.0197. Epub 2022 Jun 20.

Abstract

We derive criteria for the selection of datapoints used for data-driven reduced-order modelling and other areas of supervised learning based on Gaussian process regression (GPR). While this is a well-studied area in the fields of active learning and optimal experimental design, most criteria in the literature are empirical. Here we introduce an optimality condition for the selection of a new input, defined as the minimizer of the distance between the approximated output probability density function (pdf) of the reduced-order model and the exact one. Given that the exact pdf is unknown, we define the selection criterion as the supremum of this distance over the unit sphere of the native Hilbert space of the GPR. The resulting selection criterion, however, has a form that is difficult to compute. We combine results from GPR theory and asymptotic analysis to derive a computable form of the defined optimality criterion that is valid in the limit of small predictive variance. The derived asymptotic form of the selection criterion leads to convergence of the GPR model and guarantees a balanced distribution of data resources between probable and large-deviation outputs, resulting in an effective way of sampling towards data-driven reduced-order modelling. This article is part of the theme issue 'Data-driven prediction in dynamical systems'.
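To make the active-learning loop described above concrete, the following is a minimal sketch of GPR-based data selection. The paper's asymptotic criterion is not reproduced here; as a simplified stand-in, the sketch uses the classical uncertainty-sampling rule (pick the candidate input with the largest posterior predictive variance). The kernel choice, hyperparameters, and helper names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential covariance between row vectors in A and B
    # (an assumed kernel choice for this illustration).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def posterior_variance(X_train, X_cand, noise=1e-6, length_scale=1.0):
    # GP posterior predictive variance at candidate inputs, given the
    # training inputs: k(x, x) - k(x, X) K^{-1} k(X, x).
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand, length_scale)
    v = np.linalg.solve(K, Ks)
    return 1.0 - np.sum(Ks * v, axis=0)

def select_next(X_train, X_cand):
    # Uncertainty sampling: return the candidate index maximizing the
    # predictive variance (a stand-in for the paper's optimality criterion).
    return int(np.argmax(posterior_variance(X_train, X_cand)))

X_train = np.array([[0.0], [1.0]])          # inputs already sampled
X_cand = np.array([[0.1], [0.5], [2.0]])    # candidate new inputs
print(select_next(X_train, X_cand))         # -> 2 (farthest from the data)
```

In an actual reduced-order-modelling loop, the selected input would be evaluated (e.g. by running the full-order model), appended to the training set, and the GPR refit before the next selection.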

Keywords: Bayesian regression; active learning; data-driven modelling; optimal experimental design; optimal sampling.