Gaussian Process Regression With Interpretable Sample-Wise Feature Weights

IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5789-5803. doi: 10.1109/TNNLS.2021.3131234. Epub 2023 Sep 1.

Abstract

Gaussian process regression (GPR) is a fundamental model used in machine learning (ML). Owing to its accurate predictions with uncertainty estimates and its versatility in handling various data structures via kernels, GPR has been successfully applied in a wide range of domains. However, GPR offers no way to interpret how the features of an input contribute to its prediction. Here, we propose GPR with local explanation, which reveals the feature contributions to the prediction of each sample while maintaining the predictive performance of GPR. In the proposed model, both the prediction and the explanation for each sample are produced by an easy-to-interpret locally linear model. The weight vector of the locally linear model is assumed to be generated from multivariate Gaussian process priors. The hyperparameters of the proposed model are estimated by maximizing the marginal likelihood. For a new test sample, the proposed model predicts the value of its target variable and its weight vector, together with their uncertainties, in closed form. Experimental results on various benchmark datasets verify that the proposed model achieves predictive performance comparable to that of GPR and superior to that of existing interpretable models, while providing higher interpretability than those models, both quantitatively and qualitatively.
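The following is a minimal NumPy sketch of the core idea described above, not the authors' implementation. It assumes that each sample's prediction is a locally linear model y(x) = w(x)^T x + noise, where each weight dimension w_d(.) has an independent Gaussian process prior with a shared RBF kernel and fixed hyperparameters (the paper instead estimates hyperparameters by maximizing the marginal likelihood, and its exact prior may differ). Because the target is then a linear combination of GPs, the posterior means of both the target and the sample-wise weight vector are available in closed form; predictive variances follow from the same Gram matrix.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """RBF kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

class LocallyLinearGPR:
    """Sketch of GPR with sample-wise linear weights.

    Assumed model: y(x) = w(x)^T x + eps, where each weight
    dimension w_d(.) has an independent GP prior with a shared
    RBF kernel. Hyperparameters are fixed here for simplicity;
    a bias term can be handled by appending a constant feature.
    """

    def __init__(self, lengthscale=1.0, variance=1.0, noise=0.1):
        self.lengthscale = lengthscale
        self.variance = variance
        self.noise = noise

    def fit(self, X, y):
        self.X, self.y = X, y
        K = rbf_kernel(X, X, self.lengthscale, self.variance)
        # Effective GP covariance of y: Cov(y_i, y_j) = (x_i^T x_j) k(x_i, x_j)
        Ky = (X @ X.T) * K + self.noise ** 2 * np.eye(len(X))
        L = np.linalg.cholesky(Ky)
        self.alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return self

    def predict_weights(self, Xs):
        """Posterior mean of the sample-wise weight vector w(x*)."""
        K_star = rbf_kernel(Xs, self.X, self.lengthscale, self.variance)
        # Cov(w_d(x*), y_j) = x_{jd} k(x*, x_j), so
        # E[w_d(x*)] = sum_j x_{jd} k(x*, x_j) alpha_j
        return (K_star[:, :, None] * self.X[None, :, :] *
                self.alpha[None, :, None]).sum(axis=1)

    def predict(self, Xs):
        """Predictive mean of the target, y(x*) = w(x*)^T x*."""
        W = self.predict_weights(Xs)
        return (W * Xs).sum(axis=1)

# Example usage on synthetic data (hypothetical, for illustration only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = np.sin(X[:, 0]) * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=100)
    model = LocallyLinearGPR().fit(X, y)
    print(model.predict(X[:5]))          # predicted targets
    print(model.predict_weights(X[:5]))  # per-sample feature weights
```

In this sketch, interpretability comes from predict_weights: each test sample receives its own weight vector, whose entries indicate how strongly each feature pushes that particular prediction up or down, while the overall predictive accuracy is that of a GP with the effective kernel (x_i^T x_j) k(x_i, x_j).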