Motif Graph Neural Network

IEEE Trans Neural Netw Learn Syst. 2023 Jun 19:PP. doi: 10.1109/TNNLS.2023.3281716. Online ahead of print.

Abstract

Graphs can model complex interactions between entities that naturally emerge in many important applications. These applications can often be cast as standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model among graph embedding approaches. However, standard GNNs following the neighborhood aggregation paradigm have limited discriminative power in distinguishing high-order graph structures from low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. Yet existing motif-based GNNs still often lack sufficient discriminative power on high-order structures. To overcome these limitations, we propose the motif GNN (MGNN), a novel framework that better captures high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. Next, it performs our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN updates node representations by combining the multiple representations from different motifs. In particular, to enhance discriminative power, MGNN uses an injective function to combine the representations with respect to different motifs. We further provide a theoretical analysis showing that the proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on both node classification and graph classification tasks.
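To make the three-stage pipeline described above concrete, the following is a minimal conceptual sketch, not the authors' implementation. It assumes a hypothetical `MotifGNNLayer` that receives a list of precomputed motif-based adjacency matrices (`motif_adjs`), uses a simple subtract-the-other-motifs'-mean heuristic as a stand-in for the paper's redundancy minimization operator, and combines motif-specific representations with a GIN-style sum followed by an MLP as an illustrative injective-style combination.

```python
# Hypothetical sketch: per-motif aggregation, redundancy reduction, injective-style combination.
import torch
import torch.nn as nn


class MotifGNNLayer(nn.Module):
    """One layer of a motif-aware GNN (illustrative only, not the paper's MGNN).

    Assumes `motif_adjs` is a list of K dense motif-based adjacency
    (or motif co-occurrence) matrices, one per motif type.
    """

    def __init__(self, in_dim: int, out_dim: int, num_motifs: int):
        super().__init__()
        # One linear transform per motif to obtain motif-specific representations.
        self.motif_transforms = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_motifs)]
        )
        # MLP applied after summation, in the spirit of GIN-like injective aggregation.
        self.combine_mlp = nn.Sequential(
            nn.Linear(out_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x: torch.Tensor, motif_adjs: list) -> torch.Tensor:
        # Step 1: node representations with respect to each motif.
        per_motif = [
            torch.relu(adj @ transform(x))
            for adj, transform in zip(motif_adjs, self.motif_transforms)
        ]
        stacked = torch.stack(per_motif, dim=0)  # shape (K, N, out_dim)

        # Step 2: crude redundancy reduction -- subtract, for each motif, the
        # mean of the *other* motifs' representations, keeping what is unique.
        # (A stand-in for the paper's redundancy minimization operator.)
        K = stacked.size(0)
        mean_all = stacked.mean(dim=0, keepdim=True)
        mean_others = (mean_all * K - stacked) / max(K - 1, 1)
        distilled = stacked - mean_others

        # Step 3: injective-style combination by summing and passing through an MLP.
        return self.combine_mlp(distilled.sum(dim=0))


if __name__ == "__main__":
    N, K = 6, 3  # toy number of nodes and motif types
    x = torch.randn(N, 8)
    motif_adjs = [torch.rand(N, N) for _ in range(K)]
    layer = MotifGNNLayer(in_dim=8, out_dim=16, num_motifs=K)
    print(layer(x, motif_adjs).shape)  # torch.Size([6, 16])
```

The sum-then-MLP combination mirrors the general idea that an injective combination over multisets preserves distinctions between different motif-specific representations; the actual operator and redundancy minimization used by MGNN are defined in the paper itself.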