Learning Rates of Regularized Regression With Multiple Gaussian Kernels for Multi-Task Learning

IEEE Trans Neural Netw Learn Syst. 2018 Nov;29(11):5408-5418. doi: 10.1109/TNNLS.2018.2802469. Epub 2018 Mar 2.

Abstract

This paper considers a least squares regularized regression algorithm for multi-task learning in a union of reproducing kernel Hilbert spaces (RKHSs) with Gaussian kernels. It is assumed that the optimal prediction function of the target task and those of the related tasks lie in an RKHS with the same, but unknown, Gaussian kernel width. The samples of the related tasks are used to select the Gaussian kernel width, and the sample of the target task is used to obtain the prediction function in the RKHS with this selected width. With an error decomposition result, a fast learning rate is obtained for the target task. The key step is to estimate the sample errors of the related tasks in the union of RKHSs with Gaussian kernels. The utility of this algorithm is illustrated with one simulated data set and four real data sets. The experimental results show that the proposed algorithm can yield significant improvements in prediction error when only a few samples for the target task but many samples for the related tasks are available.
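The two-stage procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate width grid, the simple holdout split on the related tasks, the regularization parameter, and the toy data generator are all assumptions introduced here for clarity.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Pairwise Gaussian kernel matrix exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_krr(X, y, sigma, lam):
    # Least squares regularized regression in the RKHS:
    # solve (K + lam * n * I) alpha = y, predict with K(., X) alpha.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Z: gaussian_kernel(Z, X, sigma) @ alpha

def select_width(related_tasks, sigmas, lam=1e-2):
    # Stage 1: pick the kernel width minimizing the average held-out
    # error across the related tasks (simple 50/50 split, an assumption).
    best_sigma, best_err = sigmas[0], np.inf
    for s in sigmas:
        err = 0.0
        for X, y in related_tasks:
            m = len(X) // 2
            f = fit_krr(X[:m], y[:m], s, lam)
            err += np.mean((f(X[m:]) - y[m:]) ** 2)
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma

# Toy usage: related tasks share the target's smoothness scale,
# so their (more plentiful) samples can calibrate the kernel width.
rng = np.random.default_rng(0)
def make_task(n):
    X = rng.uniform(-1.0, 1.0, (n, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(n)
    return X, y

related = [make_task(100) for _ in range(3)]      # many related-task samples
sigma = select_width(related, sigmas=[0.05, 0.2, 0.5, 2.0])
Xt, yt = make_task(20)                            # few target-task samples
f_target = fit_krr(Xt, yt, sigma, lam=1e-2)       # Stage 2: fit target task
```

The design point is that width selection, the data-hungry step, is delegated to the related tasks, while the scarce target-task sample is spent only on fitting coefficients in the already-chosen RKHS.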

Publication types

  • Research Support, Non-U.S. Gov't