SCH-GAN: Semi-Supervised Cross-Modal Hashing by Generative Adversarial Network

IEEE Trans Cybern. 2020 Feb;50(2):489-502. doi: 10.1109/TCYB.2018.2868826. Epub 2018 Sep 26.

Abstract

Cross-modal hashing maps heterogeneous multimedia data into a common Hamming space to enable fast and flexible cross-modal retrieval. Supervised cross-modal hashing methods have achieved considerable progress by incorporating semantic side information, but they rely heavily on large-scale labeled cross-modal training data, which are hard to obtain because multiple modalities are involved. They also ignore the rich information contained in the large amounts of unlabeled data across modalities, which can help to model the correlations between different modalities. To address these problems, in this paper, we propose a novel semi-supervised cross-modal hashing approach based on a generative adversarial network (SCH-GAN). The main contributions can be summarized as follows: 1) we propose a novel generative adversarial network for cross-modal hashing, in which the generative model tries to select, from unlabeled data, examples of one modality that lie close to the decision margin, given a query of the other modality (e.g., retrieving images for a text query, and vice versa), while the discriminative model tries to distinguish the selected examples from the true positive examples of the query; the two models play a minimax game so that the generative model promotes the hashing performance of the discriminative model; and 2) we propose a reinforcement learning-based algorithm to drive the training of the proposed SCH-GAN: the generative model takes the correlation score predicted by the discriminative model as a reward and tries to select examples close to the margin so as to promote the discriminative model. Extensive experiments against nine state-of-the-art methods on three widely used datasets verify the effectiveness of the proposed approach.
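
To make the minimax training loop concrete, the following is a minimal PyTorch sketch of the scheme the abstract describes, written under assumptions the abstract does not fix: the feature and code sizes, the relevance function (negative squared distance between relaxed codes), the hinge ranking loss, and all identifiers (`Hasher`, `relevance`, and so on) are illustrative, not the authors' implementation.

```python
# Sketch, not the authors' code: a generator samples unlabeled examples via a
# softmax policy and is trained with REINFORCE, using the discriminator's
# correlation score as the reward; the discriminator ranks true positives
# above the generator's picks with a hinge loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, HASH_BITS = 128, 32   # assumed feature / code dimensions

class Hasher(nn.Module):
    """Maps one modality's features to relaxed hash codes in (-1, 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, HASH_BITS), nn.Tanh())
    def forward(self, x):
        return self.net(x)

def relevance(code_q, code_c):
    # Correlation score between query and candidate codes (higher = closer).
    return -(code_q.unsqueeze(1) - code_c.unsqueeze(0)).pow(2).sum(-1)

# One hashing network per modality for each player.
g_txt, g_img = Hasher(), Hasher()
d_txt, d_img = Hasher(), Hasher()
g_opt = torch.optim.Adam(list(g_txt.parameters()) + list(g_img.parameters()), 1e-3)
d_opt = torch.optim.Adam(list(d_txt.parameters()) + list(d_img.parameters()), 1e-3)

# Toy data: text queries, one labeled positive image per query, and an
# unlabeled image pool for the generator to select from.
queries   = torch.randn(8, FEAT_DIM)
positives = torch.randn(8, FEAT_DIM)
unlabeled = torch.randn(64, FEAT_DIM)

for step in range(100):
    # Generator: sample unlabeled images from a softmax policy over scores.
    scores = relevance(g_txt(queries), g_img(unlabeled))        # (8, 64)
    policy = F.softmax(scores, dim=1)
    picked = torch.multinomial(policy, 1).squeeze(1)            # one pick per query
    with torch.no_grad():
        # Reward: the discriminator's correlation score for each pick.
        reward = relevance(d_txt(queries), d_img(unlabeled))[
            torch.arange(8), picked]
    # REINFORCE: raise the log-probability of picks in proportion to reward.
    g_loss = -(torch.log(policy[torch.arange(8), picked] + 1e-8) * reward).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Discriminator: rank true positives above generator-selected examples.
    q_code = d_txt(queries)
    pos_s = relevance(q_code, d_img(positives))[torch.arange(8), torch.arange(8)]
    neg_s = relevance(q_code, d_img(unlabeled))[torch.arange(8), picked]
    d_loss = F.relu(1.0 - (pos_s - neg_s)).mean()               # hinge ranking loss
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```

Sampling from a softmax over the generator's own relevance scores, then reinforcing the picks the discriminator still scores highly, steers the generator toward the hard, near-margin examples the abstract refers to; the symmetric image-to-text direction would swap the roles of the two modalities.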