Knowledge-Embedded Mutual Guidance for Visual Reasoning

IEEE Trans Cybern. 2024 Apr;54(4):2579-2591. doi: 10.1109/TCYB.2023.3310892. Epub 2024 Mar 18.

Abstract

Visual reasoning between images and natural language is a long-standing challenge in computer vision. Most methods seek answers to questions solely by analyzing the given questions and images. Other approaches treat knowledge graphs as flattened tables when searching for the answer. However, these works suffer from two major problems: 1) the model disregards the fact that the world surrounding us interlinks what we see with the natural language we hear and speak and 2) the model largely ignores the structure of the knowledge graph (KG). To overcome these deficiencies, a model should jointly consider the two modalities of vision and language, as well as the rich structural and logical information embedded in knowledge graphs. To this end, we propose a general joint representation learning framework for visual reasoning, namely, knowledge-embedded mutual guidance. It realizes mutual guidance not only between visual data and natural language descriptions but also between knowledge graphs and reasoning models. In addition, it exploits the knowledge derived from the reasoning model to boost knowledge graphs when applied to the visual relation detection task. The experimental results demonstrate that the proposed approach substantially outperforms state-of-the-art methods on two visual reasoning benchmarks.
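To make the "mutual guidance" idea concrete, the sketch below shows one plausible way such a block could be realized: bidirectional cross-attention in which knowledge-augmented language features guide the visual features and the guided visual features guide the language features back. This is only an illustrative assumption; the module names, dimensions, and use of cross-attention are hypothetical and are not taken from the paper's actual implementation.

```python
# Hypothetical sketch of a mutual-guidance block (not the authors' code):
# language + KG entity features guide vision, then vision guides language.
import torch
import torch.nn as nn


class MutualGuidanceBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        # Language (and KG) features attend over and refine the visual features ...
        self.lang_to_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        # ... and the refined visual features guide the language features back.
        self.vis_to_lang = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_l = nn.LayerNorm(dim)

    def forward(self, vis, lang, kg):
        # vis:  (B, Nv, D) visual region features
        # lang: (B, Nl, D) question / description token features
        # kg:   (B, Nk, D) embeddings of retrieved knowledge-graph entities
        lang_kg = torch.cat([lang, kg], dim=1)          # knowledge-augmented language context
        v_ctx, _ = self.lang_to_vis(vis, lang_kg, lang_kg)
        vis = self.norm_v(vis + v_ctx)                  # language/KG -> vision guidance
        l_ctx, _ = self.vis_to_lang(lang, vis, vis)
        lang = self.norm_l(lang + l_ctx)                # vision -> language guidance
        return vis, lang


if __name__ == "__main__":
    block = MutualGuidanceBlock()
    v = torch.randn(2, 36, 512)   # e.g., 36 detected regions
    l = torch.randn(2, 20, 512)   # e.g., 20 question tokens
    k = torch.randn(2, 5, 512)    # e.g., 5 retrieved KG entities
    v_out, l_out = block(v, l, k)
    print(v_out.shape, l_out.shape)
```

In this reading, stacking such blocks lets each modality iteratively refine the other while the KG entities inject external structural knowledge into the language side; the actual framework may realize the guidance differently.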