Domain Adaptation Preconceived Hashing for Unconstrained Visual Retrieval

IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5641-5655. doi: 10.1109/TNNLS.2021.3071127. Epub 2022 Oct 5.

Abstract

Learning to hash has been widely applied to image retrieval due to its low storage cost and high retrieval efficiency. Existing hashing methods assume that the distributions of the retrieval pool (i.e., the data set being retrieved) and the query data are similar; this assumption, however, rarely holds in real-world conditions because of unconstrained visual variations such as illumination, pose, and background. When the distribution gap between the retrieval pool and the query set is large, the performance of traditional hashing methods degrades severely. We therefore propose a new efficient yet transferable hashing model for unconstrained cross-domain visual retrieval, in which the retrieval pool and the query samples are drawn from different but semantically related domains. Specifically, we propose a simple yet effective unsupervised hashing method, domain adaptation preconceived hashing (DAPH), for learning domain-invariant hashing representations. DAPH has three merits: 1) to the best of our knowledge, we are the first to address unconstrained visual retrieval by introducing domain adaptation (DA) into hashing to learn transferable hash codes; 2) a domain-invariant feature transformation is learned under a marginal discrepancy distance minimization and a feature reconstruction constraint, so that the hash codes are not only domain adaptive but also content preserving; and 3) a DA preconceived quantization loss is proposed, which further guarantees the discriminability of the learned hash codes for sample retrieval. Extensive experiments on various benchmark data sets verify that DAPH outperforms many state-of-the-art hashing methods for unconstrained (unrestricted) instance retrieval in both single- and cross-domain scenarios.
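
The abstract names three objective components but does not give their exact formulation, so the following NumPy sketch is only an illustration of how such terms could be combined: a linear-kernel MMD surrogate for the marginal discrepancy term, a Frobenius-norm reconstruction term for content preservation, and a sign-quantization term for binarization. All variable names, toy data, and the trade-off weights (0.1, 0.01) are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: source = retrieval pool, target = query domain.
d, n_s, n_t, k = 128, 500, 300, 32        # feature dim, sample counts, hash bits
Xs = rng.normal(0.0, 1.0, (n_s, d))       # source-domain features
Xt = rng.normal(0.5, 1.2, (n_t, d))       # target-domain features (shifted distribution)

W = rng.normal(0.0, 0.01, (d, k))         # shared projection into the hash space

def mmd_linear(Zs, Zt):
    """Marginal discrepancy as the squared distance between domain means
    in the projected space (a linear-kernel MMD surrogate)."""
    delta = Zs.mean(axis=0) - Zt.mean(axis=0)
    return float(delta @ delta)

def reconstruction_loss(X, W):
    """Content-preservation term: features should be recoverable from the
    projection, ||X - X W W^T||_F^2 / n."""
    R = X - X @ W @ W.T
    return float((R * R).sum() / X.shape[0])

def quantization_loss(Z):
    """Quantization term: projected codes should lie close to their
    binarized values, ||Z - sign(Z)||_F^2 / n."""
    Q = Z - np.sign(Z)
    return float((Q * Q).sum() / Z.shape[0])

Zs, Zt = Xs @ W, Xt @ W
total = (mmd_linear(Zs, Zt)
         + 0.1 * (reconstruction_loss(Xs, W) + reconstruction_loss(Xt, W))
         + 0.01 * (quantization_loss(Zs) + quantization_loss(Zt)))
print(f"toy DAPH-style objective: {total:.3f}")

# After optimizing W against such an objective, binary codes for retrieval
# would be obtained as B = sign(X @ W).
Bs = np.sign(Xs @ W)
```

In this reading, minimizing the first term aligns the two domains in the code space, the second keeps the projection informative about the original features, and the third keeps the relaxed codes near binary values so that little discrimination is lost at the final sign step.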