BCAN: Bidirectional Correct Attention Network for Cross-Modal Retrieval

IEEE Trans Neural Netw Learn Syst. 2023 May 31:PP. doi: 10.1109/TNNLS.2023.3276796. Online ahead of print.

Abstract

As a fundamental topic in bridging the gap between vision and language, cross-modal retrieval aims to establish the correspondence between fragments, i.e., subregions in images and words in texts. Whereas earlier methods focus on learning a visual-semantic embedding that maps images and sentences into a shared embedding space, existing methods tend to learn the correspondences between words and regions via cross-modal attention. However, such attention-based approaches invariably suffer from semantic misalignment between subfragments, for two reasons: 1) without modeling the relationship between subfragments and the semantics of the entire image or sentence, it is hard for such approaches to distinguish images or sentences that contain multiple fragments with the same semantics and 2) such approaches spread attention evenly over all subfragments, including nonvisual words and many redundant regions, which again leads to semantic misalignment. To solve these problems, this article proposes a bidirectional correct attention network (BCAN), which introduces a novel concept of the relevance between subfragments and the semantics of the entire image or sentence and designs a novel correct attention mechanism that models the local and global similarity between images and sentences to correct attention weights focused on the wrong fragments. Specifically, we introduce the semantic relationship between subfragments and entire images or sentences and use it to address semantic misalignment from two aspects. In our correct attention mechanism, two independent units correct the attention weights focused on the wrong fragments: the global correct unit (GCU) incorporates the global image-sentence similarity into the attention mechanism to resolve the misalignment caused by focusing attention on relevant subfragments in irrelevant pairs (RI), and the local correct unit (LCU) considers the difference in attention weights between fragments across two attention steps to resolve the misalignment caused by focusing attention on irrelevant subfragments in relevant pairs (IR). Extensive experiments on the large-scale MS-COCO and Flickr30K datasets show that the proposed method outperforms all attention-based methods and is competitive with the state of the art. Our code and pretrained model are publicly available at: https://github.com/liuyyy111/BCAN.
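
For intuition, the following is a minimal NumPy sketch of the kind of two-unit attention correction the abstract describes. The feature shapes, the drift-based damping heuristic, and names such as corrected_attention_similarity are illustrative assumptions rather than the authors' implementation; the actual model is available in the linked repository.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def l2norm(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    def corrected_attention_similarity(regions, words, lam=9.0):
        """Toy text-to-image attention with a global and a local correction step.

        regions: (R, d) image-region features; words: (W, d) word features.
        Returns a scalar image-sentence similarity score.
        """
        regions, words = l2norm(regions), l2norm(words)
        sim = words @ regions.T                      # (W, R) word-region affinities
        attn1 = softmax(lam * sim, axis=1)           # first attention step over regions

        # GCU-like global correction (assumed form): scale attention by a global
        # image-sentence similarity so fragments in irrelevant pairs are damped.
        global_sim = float(l2norm(words.mean(0)) @ l2norm(regions.mean(0)))
        attn_global = attn1 * max(global_sim, 0.0)

        # LCU-like local correction (assumed form): re-attend after mixing in the
        # attended context and damp words whose attention drifts a lot between the
        # two steps (a crude proxy for nonvisual or weakly grounded words).
        context1 = attn1 @ regions                   # (W, d) attended image context
        sim2 = l2norm(words + context1) @ regions.T
        attn2 = softmax(lam * sim2, axis=1)
        drift = np.abs(attn2 - attn1).sum(axis=1, keepdims=True)   # (W, 1)
        attn_local = attn2 * np.exp(-drift)

        attn = 0.5 * (attn_global + attn_local)
        context = l2norm(attn @ regions)
        # Sentence-image similarity: mean cosine between each word and its context.
        return float(np.mean(np.sum(words * context, axis=1)))

    # Toy usage: 36 region features and 12 word features of dimension 256.
    rng = np.random.default_rng(0)
    score = corrected_attention_similarity(rng.normal(size=(36, 256)),
                                            rng.normal(size=(12, 256)))
    print(round(score, 4))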