TL-ADA: Transferable Loss-based Active Domain Adaptation

Neural Netw. 2023 Apr;161:670-681. doi: 10.1016/j.neunet.2023.02.004. Epub 2023 Feb 14.

Abstract

The field of Active Domain Adaptation (ADA) investigates ways to close the performance gap between supervised and unsupervised learning settings. Previous ADA research has focused primarily on query selection, with little examination of how to effectively train on newly labeled target samples together with labeled source samples and unlabeled target samples. In this study, we present a novel Transferable Loss-based ADA (TL-ADA) framework. Our approach is inspired by loss-based query selection, which has shown promising results in active learning. However, directly applying loss-based query selection to the ADA scenario leads to a buildup of high-loss samples that, owing to transferability issues and low diversity, do not contribute to the model. To address these challenges, we propose a transferable doubly nested loss that incorporates target pseudo labels and a domain adversarial loss. The TL-ADA framework trains the model sequentially, considering both the domain type (source/target) and label availability (labeled/unlabeled). Additionally, we encourage the pseudo labels to have low self-entropy and diverse class distributions to improve their reliability. Experiments on several benchmark datasets demonstrate that TL-ADA outperforms previous ADA methods, and in-depth analysis supports the effectiveness of the proposed approach.
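Loss-based query selection, as used in active learning, ranks unlabeled samples by a predicted loss and sends the top-ranked ones for annotation; the abstract's reliability criterion can be folded in by penalizing samples whose pseudo labels have high self-entropy. The sketch below is a minimal illustration of this combined ranking idea, assuming NumPy arrays of predicted losses and class probabilities; the function names, the linear scoring rule, and the `entropy_weight` parameter are hypothetical and not taken from the paper.

```python
import numpy as np

def self_entropy(probs, eps=1e-12):
    """Per-sample Shannon entropy of predicted class distributions.

    probs: (n_samples, n_classes) array of class probabilities.
    """
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_queries(pred_losses, probs, budget, entropy_weight=0.5):
    """Rank unlabeled target samples for annotation (illustrative only).

    Prefers samples with high predicted loss (informative) while
    down-weighting those whose pseudo labels are unreliable, i.e.
    have high self-entropy.
    """
    scores = pred_losses - entropy_weight * self_entropy(probs)
    # Indices of the `budget` highest-scoring samples, best first.
    return np.argsort(-scores)[:budget]
```

For example, a sample with predicted loss 2.0 but a uniform (maximally uncertain) pseudo label is scored lower than its raw loss alone would suggest, which mimics the paper's goal of avoiding a buildup of high-loss samples that do not help the model.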

Keywords: Active Domain Adaptation; Loss prediction; Pseudo labels; Ranking loss; Transferable query selection.

MeSH terms

  • Benchmarking*
  • Entropy
  • Reproducibility of Results