Semantic-visual shared knowledge graph for zero-shot learning

PeerJ Comput Sci. 2023 Mar 22:9:e1260. doi: 10.7717/peerj-cs.1260. eCollection 2023.

Abstract

Almost all existing zero-shot learning methods work only on benchmark datasets (e.g., CUB, SUN, AwA, FLO and aPY) that already provide pre-defined attributes for all the classes. These methods are thus hard to apply to real-world datasets (such as ImageNet), since no such pre-defined attributes exist in those data environments. Recent works have explored using semantically rich knowledge graphs (such as WordNet) as a substitute for pre-defined attributes. However, these methods encounter a serious "domain shift" problem because such a knowledge graph cannot provide semantics detailed enough to describe fine-grained information. To this end, we propose a semantic-visual shared knowledge graph (SVKG) to enhance the detailed information available for zero-shot learning. SVKG represents high-level information using semantic embeddings but describes fine-grained information using visual features. These visual features can be extracted directly from real-world images, substituting for pre-defined attributes. A multi-modal graph convolution network is also proposed to transform SVKG into graph representations that can be used for downstream zero-shot learning tasks. Experimental results on real-world datasets without pre-defined attributes demonstrate the effectiveness and benefits of the proposed method. Our method obtains a +2.8%, +0.5%, and +0.2% increase over the state-of-the-art on the 2-hops, 3-hops, and All splits, respectively.
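To make the idea of a multi-modal graph convolution over SVKG concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes each class node carries both a semantic embedding (e.g., a word vector of the class name) and a visual feature (e.g., an averaged CNN feature), and fuses the two modalities before propagating over a normalized knowledge-graph adjacency. The class name `MultiModalGCNLayer`, the fusion-by-addition scheme, and all dimensions are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's code): one graph-convolution layer
# that fuses semantic and visual node features, then aggregates over the graph.
import torch
import torch.nn as nn

class MultiModalGCNLayer(nn.Module):
    def __init__(self, sem_dim, vis_dim, out_dim):
        super().__init__()
        # Separate projections for the two modalities, plus a shared update.
        self.sem_proj = nn.Linear(sem_dim, out_dim)
        self.vis_proj = nn.Linear(vis_dim, out_dim)
        self.update = nn.Linear(out_dim, out_dim)

    def forward(self, adj_norm, sem_feats, vis_feats):
        # adj_norm:  (N, N) normalized adjacency of the knowledge graph
        # sem_feats: (N, sem_dim) semantic embeddings per class node
        # vis_feats: (N, vis_dim) visual features per class node
        fused = self.sem_proj(sem_feats) + self.vis_proj(vis_feats)  # modality fusion
        propagated = adj_norm @ fused                                 # neighborhood aggregation
        return torch.relu(self.update(propagated))

# Toy usage with placeholder shapes (not taken from the paper).
num_classes, sem_dim, vis_dim, out_dim = 10, 300, 2048, 512
adj = torch.eye(num_classes)               # stand-in for a normalized graph adjacency
sem = torch.randn(num_classes, sem_dim)
vis = torch.randn(num_classes, vis_dim)
layer = MultiModalGCNLayer(sem_dim, vis_dim, out_dim)
node_repr = layer(adj, sem, vis)           # (10, 512) graph representations for downstream ZSL
```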

Keywords: Image classification; Knowledge graph; Multi-modal learning; Zero-shot learning.

Grants and funding

The work is supported by the National Natural Science Foundation of China (No. 62106216 and No. 62162064). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.