Dual-Correction-Adaptation Network for Noisy Knowledge Transfer

IEEE Trans Neural Netw Learn Syst. 2023 Oct 19:PP. doi: 10.1109/TNNLS.2023.3322390. Online ahead of print.

Abstract

Unsupervised domain adaptation (UDA) promotes target learning via a single-directional transfer from a label-rich source domain to an unlabeled target domain, while the reverse adaptation from target to source has not yet been jointly considered. In real teaching practice, a teacher helps students learn and in turn benefits from them; such a virtuous cycle inspires us to explore dual-directional transfer between domains. In fact, target pseudo-labels predicted by the source commonly involve noise due to model bias; moreover, the source domain usually contains innate label noise, which inevitably aggravates target noise, leading to noise amplification. Transfer from target to source exploits target knowledge to rectify the source, which in turn enables better source-to-target transfer, establishing a virtuous transfer cycle. To this end, we propose a dual-correction-adaptation network (DualCAN), in which adaptation and correction cycle between domains, such that learning in both domains can be boosted gradually. To the best of our knowledge, this is the first attempt at dual-directional adaptation. Empirical results validate DualCAN with remarkable performance gains, particularly on extremely noisy tasks (e.g., approximately +10% on D → A of Office-31 with 40% label corruption).
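To make the dual-directional idea concrete, the following is a minimal toy sketch, not the paper's actual model: two synthetic Gaussian domains, a nearest-centroid learner standing in for the network, 40% source label corruption, and an alternating loop in which the source pseudo-labels the target (adaptation) and the target's pseudo-labeled centroids relabel the noisy source (correction). All function names and the data setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=100):
    """Hypothetical two-class domain: Gaussian blobs at (0,0) and (4,4), shifted."""
    X0 = rng.normal([0.0, 0.0], 1.0, (n, 2)) + shift
    X1 = rng.normal([4.0, 4.0], 1.0, (n, 2)) + shift
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

Xs, ys_clean = make_domain(shift=0.0)      # source domain
Xt, yt_true = make_domain(shift=1.0)       # target; labels used only for evaluation

# Inject 40% label corruption into the source, mimicking the noisy setting.
ys = ys_clean.copy()
flip = rng.random(len(ys)) < 0.4
ys[flip] = 1 - ys[flip]

def centroids(X, y):
    """Class centroids; a stand-in for training a classifier on (X, y)."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, C):
    """Nearest-centroid prediction."""
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return d.argmin(axis=1)

for step in range(3):
    # Source -> target (adaptation): pseudo-label the target with the source model.
    yt_pseudo = predict(Xt, centroids(Xs, ys))
    # Target -> source (correction): use target knowledge to relabel noisy source data.
    ys = predict(Xs, centroids(Xt, yt_pseudo))

target_acc = (predict(Xt, centroids(Xs, ys)) == yt_true).mean()
source_label_acc = (ys == ys_clean).mean()
```

On this well-separated toy data the cycle recovers most corrupted source labels and yields high target accuracy, illustrating how each direction of transfer can clean up the other; the real DualCAN operates on deep features with learned correction rather than centroid relabeling.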