Multi-Semantic Decoding of Visual Perception with Graph Neural Networks

Int J Neural Syst. 2024 Apr;34(4):2450016. doi: 10.1142/S0129065724500163. Epub 2024 Feb 17.

Abstract

Constructing computational decoding models that account for the cortical representation of semantic information plays a crucial role in understanding visual perception. The human visual system processes interactive relationships among different objects when perceiving the semantic content of natural scenes. However, existing semantic decoding models commonly treat categories as visually and semantically independent and rarely incorporate prior knowledge of inter-category relationships. In this work, a novel semantic graph learning model was proposed to decode multiple semantic categories of perceived natural images from brain activity. The proposed model was validated on functional magnetic resonance imaging data collected from five healthy subjects while they viewed 2750 natural images spanning 52 semantic categories. The results showed that the graph neural network-based decoding model achieved higher accuracies than other deep neural network models. Moreover, the co-occurrence probability among semantic categories was significantly correlated with decoding accuracy. Additionally, the results suggested that semantic content is organized hierarchically, with higher visual areas more closely related to the internal visual experience. Together, this study provides a superior computational framework for multi-semantic decoding and supports a visual integration mechanism of semantic processing.

Keywords: fMRI; graph neural network; natural images; semantic decoding; visual perception.
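To make the described approach concrete, below is a minimal sketch (not the authors' implementation) of a graph neural network decoder in which the 52 semantic categories form the graph nodes and a label co-occurrence matrix supplies the edge prior, while an encoder maps fMRI voxel patterns into the same embedding space for multi-label prediction. All sizes and names (N_VOXELS, EMB_DIM, the two-layer GCN-style propagation) are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: multi-label semantic decoding from fMRI with a category
# co-occurrence graph. Layer sizes and architecture are assumptions.
import torch
import torch.nn as nn

N_CATEGORIES = 52      # semantic categories in the stimulus set
N_VOXELS = 4096        # assumed size of the visual-cortex voxel pattern
EMB_DIM = 128          # assumed category-embedding dimensionality


def normalize_adjacency(cooccurrence: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize the co-occurrence matrix: D^-1/2 (A + I) D^-1/2."""
    a = cooccurrence + torch.eye(cooccurrence.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).clamp(min=1e-6).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


class GraphSemanticDecoder(nn.Module):
    """fMRI pattern -> shared feature; category graph -> per-category classifiers."""

    def __init__(self, cooccurrence: torch.Tensor):
        super().__init__()
        self.register_buffer("adj", normalize_adjacency(cooccurrence))
        # Encode the voxel response into a shared feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(N_VOXELS, 512), nn.ReLU(), nn.Linear(512, EMB_DIM)
        )
        # Learnable node embeddings, one per semantic category.
        self.node_emb = nn.Parameter(torch.randn(N_CATEGORIES, EMB_DIM))
        # Two GCN-style propagation layers over the co-occurrence graph.
        self.gc1 = nn.Linear(EMB_DIM, EMB_DIM)
        self.gc2 = nn.Linear(EMB_DIM, EMB_DIM)

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(voxels)                          # (batch, EMB_DIM)
        h = torch.relu(self.gc1(self.adj @ self.node_emb))   # propagate the co-occurrence prior
        h = self.gc2(self.adj @ h)                           # (N_CATEGORIES, EMB_DIM)
        return feat @ h.t()                                  # (batch, N_CATEGORIES) logits


# Usage: multi-label training with a binary cross-entropy objective on placeholder data.
cooc = torch.rand(N_CATEGORIES, N_CATEGORIES)                # placeholder co-occurrence counts
model = GraphSemanticDecoder(cooc)
logits = model(torch.randn(8, N_VOXELS))                     # batch of 8 fMRI patterns
targets = torch.randint(0, 2, (8, N_CATEGORIES)).float()     # multi-hot category labels
loss = nn.BCEWithLogitsLoss()(logits, targets)
```

The design choice illustrated here is that the graph propagation lets each category's classifier borrow statistical strength from frequently co-occurring categories, which is one plausible way a co-occurrence prior could raise decoding accuracy as reported in the abstract.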

MeSH terms

  • Brain / diagnostic imaging
  • Brain Mapping* / methods
  • Humans
  • Learning
  • Magnetic Resonance Imaging / methods
  • Neural Networks, Computer
  • Semantics*
  • Visual Perception