Re-Attention for Visual Question Answering

IEEE Trans Image Process. 2021;30:6730-6743. doi: 10.1109/TIP.2021.3097180. Epub 2021 Jul 26.

Abstract

A simultaneous understanding of questions and images is crucial in Visual Question Answering (VQA). While existing models achieve satisfactory performance by associating questions with key objects in images, the answers also contain rich information that can be used to describe the visual content of images. In this paper, we propose a re-attention framework that exploits the information in answers for the VQA task. The framework first learns initial attention weights for the objects by computing the similarity of each word-object pair in the feature space. Then, the visual attention map is reconstructed by re-attending to the objects in the image based on the answer. By constraining the initial visual attention map and the reconstructed one to be consistent, the learned visual attention map can be corrected with the answer information. In addition, we introduce a gate mechanism that automatically controls the contribution of re-attention to model training based on the entropy of the learned initial visual attention maps. We conduct experiments on three benchmark datasets, and the results demonstrate that the proposed model performs favorably against state-of-the-art methods.
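
The sketch below illustrates the idea described above: a question-driven attention map over objects, an answer-driven re-attention map, a consistency term between the two, and an entropy-based gate on that term. It is a minimal illustration only; the feature shapes, the dot-product similarity, the KL-based consistency, and the sigmoid gating function are assumptions for exposition and are not taken from the paper.

```python
# Minimal PyTorch sketch of the re-attention idea from the abstract.
# Shapes and the specific similarity/consistency/gating choices are assumptions.
import torch
import torch.nn.functional as F

def attention_weights(word_feats, obj_feats):
    """Attend objects with each word via dot-product similarity in a shared space.

    word_feats: (T, d) word features; obj_feats: (K, d) object features.
    Returns a (K,) attention distribution over objects (words pooled by mean).
    """
    sim = word_feats @ obj_feats.t()             # (T, K) word-object similarities
    return F.softmax(sim, dim=-1).mean(dim=0)    # pool over words -> (K,)

def re_attention_loss(q_feats, a_feats, obj_feats, tau=1.0):
    """Consistency between question-driven and answer-driven attention maps,
    gated by the entropy of the initial (question-driven) map."""
    att_q = attention_weights(q_feats, obj_feats)   # initial attention from the question
    att_a = attention_weights(a_feats, obj_feats)   # re-attention reconstructed from the answer

    # Entropy of the initial map: a diffuse (high-entropy) map receives a larger
    # correction signal from the answer; a peaked map is left mostly unchanged.
    entropy = -(att_q * (att_q + 1e-12).log()).sum()
    gate = torch.sigmoid(entropy - tau)             # assumed form of the gate

    # Consistency term keeping the two attention maps aligned (KL used here).
    consistency = F.kl_div((att_q + 1e-12).log(), att_a, reduction="sum")
    return gate * consistency

# Usage with random features: 12 question words, 4 answer words, 36 objects, d=512.
q = torch.randn(12, 512)
a = torch.randn(4, 512)
v = torch.randn(36, 512)
loss = re_attention_loss(q, a, v)
```

Because the answer is only available at training time, a loss of this form would be added to the standard VQA objective during training and dropped at inference, where only the question-driven attention is used.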