Class-specific Reconstruction Transfer Learning for Visual Recognition Across Domains

IEEE Trans Image Process. 2019 Nov 5. doi: 10.1109/TIP.2019.2948480. Online ahead of print.

Abstract

Subspace learning and reconstruction have been widely explored in recent transfer learning work. Generally, specially designed projection and reconstruction transfer functions that bridge multiple domains for heterogeneous knowledge sharing are desired. However, we argue that existing subspace-reconstruction-based domain adaptation algorithms neglect the class prior, so the learned transfer function is biased, especially when some classes suffer from data scarcity. Different from these previous methods, in this paper we propose a novel class-wise reconstruction-based adaptation method called Class-specific Reconstruction Transfer Learning (CRTL), which optimizes a well-modeled transfer loss function by fully exploiting intra-class dependency and inter-class independency. The merits of CRTL are three-fold. 1) Using a class-specific reconstruction matrix to align the source domain with the target domain fully exploits the class prior in modeling domain distribution consistency, which benefits cross-domain classification. 2) Furthermore, to preserve the intrinsic relationship between data and labels after feature augmentation, a projected Hilbert-Schmidt Independence Criterion (pHSIC), which measures the dependency between data and labels, is proposed for the first time in the transfer learning community by mapping the data from the raw space to a reproducing kernel Hilbert space (RKHS). 3) In addition, by imposing low-rank and sparse constraints on the class-specific reconstruction coefficient matrix, the global and local data structures that contribute to domain correlation can be effectively preserved. Extensive experiments on challenging benchmark datasets demonstrate the superiority of the proposed method over state-of-the-art representation-based domain adaptation methods. The demo code is available at https://github.com/wangshanshanCQU/CRTL.
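For intuition only (the paper defines the exact objective and the projected form of the criterion), the standard empirical HSIC between features and labels over n samples is

HSIC(X, Y) = (n - 1)^{-2} \, \mathrm{tr}(K H L H), \quad H = I_n - \tfrac{1}{n} \mathbf{1}_n \mathbf{1}_n^\top,

where K and L are kernel Gram matrices computed on the (projected) features and on the labels, respectively. A schematic sketch, not the paper's actual formulation, of how a class-wise reconstruction objective with low-rank and sparse regularization is typically assembled could read

\min_{P, \{Z_c\}} \; \sum_c \big\| P^\top X_t^{(c)} - P^\top X_s^{(c)} Z_c \big\|_F^2 + \lambda \|Z_c\|_* + \beta \|Z_c\|_1 \; - \; \gamma \, \mathrm{HSIC}(P^\top X, Y),

where P is a shared projection, X_s^{(c)} and X_t^{(c)} collect source and target samples of class c, Z_c is the class-specific reconstruction coefficient matrix, and \|\cdot\|_* and \|\cdot\|_1 are the nuclear and \ell_1 norms enforcing the low-rank and sparse structure mentioned above. The symbols and weights \lambda, \beta, \gamma here are illustrative placeholders, not the hyperparameters or notation used in the paper.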