Online Asymmetric Metric Learning With Multi-Layer Similarity Aggregation for Cross-Modal Retrieval

IEEE Trans Image Process. 2019 Sep;28(9):4299-4312. doi: 10.1109/TIP.2019.2908774. Epub 2019 Apr 2.

Abstract

Cross-modal retrieval has attracted intensive attention in recent years, and a fundamental yet challenging problem is how to measure the similarity between heterogeneous data modalities. Despite using modality-specific representation learning techniques, most existing shallow or deep models treat the modalities equally and neglect the intrinsic modality heterogeneity and information imbalance between images and texts. In this paper, we propose an online similarity function learning framework that learns a metric faithfully reflecting the cross-modal semantic relation. Since multiple CNN feature layers naturally represent visual information ranging from low-level visual patterns to high-level semantic abstraction, we propose a new asymmetric image-text similarity formulation that aggregates layer-wise visual-textual similarities, each parameterized by its own bilinear parameter matrix. To learn the aggregated similarity function effectively, we develop three similarity combination strategies: average kernel, multiple kernel learning, and layer gating. The two kernel-based strategies assign uniform layer weights to all data pairs, whereas layer gating operates on the original feature representations and assigns instance-aware layer weights to each data pair. All three strategies are learned by preserving the bi-directional relative similarity expressed by a large number of cross-modal training triplets. Experiments on three public datasets demonstrate the effectiveness of our methods.
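
To make the aggregated bilinear similarity and the triplet-based objective described above concrete, the short NumPy sketch below illustrates the general idea under simplifying assumptions: the layer dimensions, text dimension, margin value, matrices W, and all function names are hypothetical, and only the uniform-weight (average-kernel) combination is shown; it is not the paper's actual implementation.

import numpy as np

# Assumed dimensions: three CNN layers with different feature sizes,
# and a single fixed-length text feature per caption (asymmetric setup:
# multi-layer image representation vs. one text representation).
LAYER_DIMS = [256, 512, 1024]   # hypothetical per-layer image feature sizes
TEXT_DIM = 300                  # hypothetical text feature size
rng = np.random.default_rng(0)

# One bilinear parameter matrix W_l per CNN layer.
W = [rng.normal(scale=0.01, size=(d, TEXT_DIM)) for d in LAYER_DIMS]

def layer_similarities(img_layers, txt):
    # Layer-wise bilinear similarities s_l(x, y) = x_l^T W_l y.
    return np.array([x_l @ W_l @ txt for x_l, W_l in zip(img_layers, W)])

def aggregated_similarity(img_layers, txt, weights=None):
    # Aggregate layer-wise similarities: uniform weights correspond to the
    # average-kernel strategy; learned global or per-pair weights would
    # correspond to multiple kernel learning or layer gating, respectively.
    s = layer_similarities(img_layers, txt)
    if weights is None:
        weights = np.full(len(s), 1.0 / len(s))  # average-kernel combination
    return float(weights @ s)

def triplet_hinge_loss(img_layers, txt_pos, txt_neg, margin=1.0, weights=None):
    # Relative-similarity constraint for one cross-modal triplet: the matching
    # text should score higher than the non-matching one by at least the margin.
    s_pos = aggregated_similarity(img_layers, txt_pos, weights)
    s_neg = aggregated_similarity(img_layers, txt_neg, weights)
    return max(0.0, margin - s_pos + s_neg)

# Toy usage with random features.
img = [rng.normal(size=d) for d in LAYER_DIMS]
txt_pos, txt_neg = rng.normal(size=TEXT_DIM), rng.normal(size=TEXT_DIM)
print(triplet_hinge_loss(img, txt_pos, txt_neg))

In this sketch the bilinear matrices would be updated online from such triplet losses (summed over both image-to-text and text-to-image directions); the gating variant would additionally predict the per-pair layer weights from the original features rather than fixing them.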