Cost-effective framework for gradual domain adaptation with multifidelity

Neural Netw. 2023 Jul:164:731-741. doi: 10.1016/j.neunet.2023.03.035. Epub 2023 Mar 27.

Abstract

In domain adaptation, prediction performance degrades when the distance between the source and target domains is large. Gradual domain adaptation is one solution to this issue; it assumes access to intermediate domains that shift gradually from the source to the target domain. In previous work, the number of samples in the intermediate domains was assumed to be sufficiently large, so self-training was possible without labeled data. If only a few intermediate domains are accessible, the distances between consecutive domains become large and self-training fails. In practice, the cost of obtaining samples varies across intermediate domains, and it is natural to assume that the closer an intermediate domain is to the target domain, the more expensive its samples are to obtain. To address this trade-off between cost and accuracy, we propose a framework that combines multifidelity learning and active domain adaptation. The effectiveness of the proposed method is evaluated in experiments with real-world datasets.
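For context, the sketch below illustrates the gradual self-training baseline the abstract refers to: a model is fit on labeled source data and then re-fit on its own pseudo-labels in each successive intermediate domain. It is a minimal illustration only; the function name and model choice are hypothetical, and the paper's actual contribution (combining multifidelity learning with active domain adaptation) is not implemented here.

```python
# Minimal sketch of gradual self-training through intermediate domains
# (the baseline setting described in the abstract; not the proposed
# multifidelity/active framework).
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


def gradual_self_train(X_source, y_source, intermediate_domains, base_model=None):
    """Fit on labeled source data, then adapt through each unlabeled
    intermediate domain by retraining on the model's own pseudo-labels.

    intermediate_domains: list of feature arrays ordered from the domain
    nearest the source to the domain nearest the target.
    """
    model = base_model or LogisticRegression(max_iter=1000)
    model.fit(X_source, y_source)
    for X_inter in intermediate_domains:
        pseudo_labels = model.predict(X_inter)      # self-training step
        model = clone(model).fit(X_inter, pseudo_labels)
    return model
```

When the accessible intermediate domains are few, consecutive domains in this loop are far apart, the pseudo-labels become unreliable, and the procedure fails, which is the failure mode motivating the proposed framework.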

Keywords: Active learning; Gradual domain adaptation; Multifidelity learning.

MeSH terms

  • Cost-Benefit Analysis*