Incremental Unsupervised Domain-Adversarial Training of Neural Networks

IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4864-4878. doi: 10.1109/TNNLS.2020.3025954. Epub 2021 Oct 27.

Abstract

In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn. When this is not the case, the behavior of the learned model is unpredictable and depends on the degree of similarity between the training and test distributions. One of the research topics that investigates this scenario is domain adaptation (DA). Deep neural networks have brought dramatic advances in pattern recognition, which is why there have been many attempts to provide good DA algorithms for these models. Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively. We make use of an existing unsupervised domain-adaptation algorithm to identify the target samples whose labels can be predicted with the greatest confidence. The output of the model is analyzed in different ways to determine these candidate samples. The selected samples are then added to the source training set by self-labeling, and the process is repeated until all target samples are labeled. This approach implements a form of adversarial training in which, by moving the self-labeled samples from the target set to the source set, the DA algorithm is forced to look for new features after each iteration. Our results show a clear improvement over the non-incremental case on several data sets, also outperforming other state-of-the-art DA algorithms.
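The following is a minimal sketch of the incremental self-labeling loop described above, under stated assumptions: the unsupervised domain-adversarial network retrained at every iteration in the paper is replaced here by a plain scikit-learn LogisticRegression as a stand-in, and the function name, confidence threshold, and batch-size parameter are illustrative choices, not taken from the article.

```python
# Sketch of the incremental loop: repeatedly (re)train a model, self-label the
# most confidently predicted target samples, and move them to the source set.
# NOTE: LogisticRegression is only a placeholder for the domain-adversarial
# model used in the paper; threshold and min_batch are assumed values.
import numpy as np
from sklearn.linear_model import LogisticRegression

def incremental_self_labeling(X_src, y_src, X_tgt,
                              confidence_threshold=0.9, min_batch=1):
    """Return pseudo-labels for all target samples, in their original order."""
    pseudo_labels = np.full(len(X_tgt), -1, dtype=int)
    remaining = np.arange(len(X_tgt))  # indices of still-unlabeled target samples

    while remaining.size > 0:
        # 1. (Re)train on the current source set. In the paper this step is a
        #    full unsupervised DA training run that also sees the target data.
        model = LogisticRegression(max_iter=1000).fit(X_src, y_src)

        # 2. Score the remaining target samples and measure prediction confidence.
        proba = model.predict_proba(X_tgt[remaining])
        conf = proba.max(axis=1)
        preds = model.classes_[proba.argmax(axis=1)]

        # 3. Select candidates: confident samples, or the top few if none pass
        #    the threshold, so the loop always makes progress.
        selected = np.where(conf >= confidence_threshold)[0]
        if selected.size == 0:
            selected = np.argsort(conf)[-min_batch:]

        # 4. Self-label the candidates and move them from the target to the
        #    source set; the model is then retrained on the enlarged source set.
        chosen = remaining[selected]
        pseudo_labels[chosen] = preds[selected]
        X_src = np.vstack([X_src, X_tgt[chosen]])
        y_src = np.concatenate([y_src, preds[selected]])
        remaining = np.delete(remaining, selected)

    return pseudo_labels
```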

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Humans
  • Neural Networks, Computer*
  • Pattern Recognition, Automated / methods
  • Pattern Recognition, Automated / trends*
  • Unsupervised Machine Learning / trends*