Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation

Comput Intell Neurosci. 2016;2016:7046563. doi: 10.1155/2016/7046563. Epub 2015 Dec 27.

Abstract

Domain adaptation has received much attention as a major form of transfer learning. One issue that must be considered in domain adaptation is the gap between the source domain and the target domain. To improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation that combines source and target data with a new regularizer that takes generalization bounds into account. This regularization term uses an integral probability metric (IPM) as the distance between the source domain and the target domain, and can therefore bound the testing error of an existing predictor. Since the computation of the IPM involves only the two distributions, this regularization term is independent of specific classifiers. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and can thus be solved effectively by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method.
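As a rough illustration of the distance the abstract describes, the sketch below computes the squared maximum mean discrepancy (MMD), a well-known instance of an IPM taken over the unit ball of an RKHS, between empirical source and target samples. The RBF kernel, the `gamma` value, and the biased V-statistic estimator are illustrative assumptions, not the paper's exact construction; the point is that the quantity depends only on the two samples, not on any classifier.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2).
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased (V-statistic) estimate of the squared MMD, an IPM:
    # MMD^2 = E[k(s, s')] - 2 E[k(s, t)] + E[k(t, t')].
    # It involves only the two samples, so it is classifier-independent.
    return (rbf_kernel(Xs, Xs, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.normal(0.0, 1.0, size=(200, 2))
    target_close = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution
    target_far = rng.normal(2.0, 1.0, size=(200, 2))    # shifted distribution
    print(mmd2(source, target_close), mmd2(source, target_far))
```

A larger domain gap yields a larger IPM value, which in the paper's framework tightens or loosens the generalization bound accordingly; in an ERM objective, such a term could enter as an additive penalty weighted by a trade-off coefficient.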

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Generalization, Psychological / physiology*
  • Humans
  • Models, Theoretical*
  • Probability*
  • Regression Analysis
  • Supervised Machine Learning
  • Transfer, Psychology*