GPENs: Graph Data Learning With Graph Propagation-Embedding Networks

IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):3925-3938. doi: 10.1109/TNNLS.2021.3120100. Epub 2023 Aug 4.

Abstract

Compact representation of graph data is a fundamental problem in pattern recognition and machine learning. Recently, graph neural networks (GNNs) have been widely studied for graph-structured data representation and learning tasks, such as graph semi-supervised learning, clustering, and low-dimensional embedding. In this article, we present graph propagation-embedding networks (GPENs), a new model for graph-structured data representation and learning problems. GPENs are mainly motivated by 1) revisiting traditional graph propagation techniques for context-aware feature representation of graph nodes and 2) recent studies on deep graph embedding and neural network architectures. GPENs integrate feature propagation on the graph and low-dimensional embedding simultaneously into a unified network using a novel propagation-embedding architecture. GPENs have three main advantages. First, GPENs are well motivated and can be explained from the perspectives of feature propagation and deep learning architectures. Second, the equilibrium representation of the propagation-embedding operation in GPENs has both exact and approximate formulations, both of which have simple closed-form solutions; this guarantees the compactness and efficiency of GPENs. Third, GPENs can be naturally extended to multiple GPENs (M-GPENs) to address data with multiple graph structures. Experiments on various semi-supervised learning tasks on several benchmark datasets demonstrate the effectiveness and benefits of the proposed GPENs and M-GPENs.
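To make the propagation-embedding idea concrete, the sketch below (Python/NumPy) illustrates one common way such an equilibrium layer can be realized: a fixed point of the form H = alpha * A_norm @ H + (1 - alpha) * X @ W, which admits both an exact closed-form solution via a linear solve and an approximate solution via truncated iteration. The update rule, normalization, and parameter names here are illustrative assumptions for exposition, not the exact GPEN formulation from the article.

    import numpy as np

    def normalize_adj(A):
        """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
        A_hat = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    def propagation_embedding(A, X, W, alpha=0.9, K=None):
        """Equilibrium of H = alpha * A_norm @ H + (1 - alpha) * X @ W.

        K is None  -> exact closed-form solution via a linear solve.
        K is an int -> approximate solution by K propagation iterations
        (converges for 0 <= alpha < 1, since the spectral radius of
        alpha * A_norm is then below 1).
        """
        A_norm = normalize_adj(A)
        n = A_norm.shape[0]
        E = (1.0 - alpha) * X @ W          # low-dimensional embedding term
        if K is None:
            # Exact: H = (1 - alpha) * (I - alpha * A_norm)^{-1} @ X @ W
            return np.linalg.solve(np.eye(n) - alpha * A_norm, E)
        H = E.copy()
        for _ in range(K):                 # truncated fixed-point iteration
            H = alpha * A_norm @ H + E
        return H

    # Usage on a toy 4-node graph with 3-d features embedded into 2-d:
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.random.randn(4, 3)
    W = np.random.randn(3, 2)
    H_exact = propagation_embedding(A, X, W)         # closed form
    H_approx = propagation_embedding(A, X, W, K=20)  # iterative approximation

With this formulation, the exact and iterative variants trade an O(n^3) solve for K sparse matrix-vector products, which is the kind of exact-versus-approximate trade-off the abstract attributes to the propagation-embedding operation.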