Adaptive Propagation Graph Convolutional Network

IEEE Trans Neural Netw Learn Syst. 2021 Oct;32(10):4755-4760. doi: 10.1109/TNNLS.2020.3025110. Epub 2021 Oct 5.

Abstract

Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertex-wise operations with message-passing exchanges across nodes. Concerning the latter, two key questions arise: 1) how to design a differentiable exchange protocol (e.g., the one-hop Laplacian smoothing of the original GCN) and 2) how to characterize the tradeoff in complexity with respect to the local updates. In this brief, we show that state-of-the-art results can be achieved by adapting the number of communication steps independently at every node. In particular, we endow each node with a halting unit (inspired by Graves' adaptive computation time [1]) that, after every exchange, decides whether to keep communicating or to stop. We show that the proposed adaptive propagation GCN (AP-GCN) achieves results superior or comparable to the best models proposed so far on a number of benchmarks, while requiring only a small overhead in terms of additional parameters. We also investigate a regularization term that enforces an explicit tradeoff between communication and accuracy. The code for the AP-GCN experiments is released as an open-source library.
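To make the halting mechanism concrete, the following is a minimal PyTorch sketch of per-node adaptive propagation in the style of Graves' adaptive computation time. It is a hedged reconstruction, not the authors' released implementation: the module name AdaptivePropagation, the dense normalized adjacency adj, and the hyperparameters max_steps and eps are illustrative assumptions; consult the open-source AP-GCN library for the actual code.

```python
import torch
import torch.nn as nn

class AdaptivePropagation(nn.Module):
    """Sketch of per-node adaptive propagation with an ACT-style halting unit.

    Hypothetical reconstruction from the abstract, not the released code.
    """

    def __init__(self, dim, max_steps=10, eps=0.05):
        super().__init__()
        self.halt = nn.Linear(dim, 1)  # halting unit, shared across nodes
        self.max_steps = max_steps     # hard cap on communication steps
        self.eps = eps                 # a node halts once its cumulative prob. exceeds 1 - eps

    def forward(self, h, adj):
        # h:   (N, dim) node embeddings after the vertex-wise update
        # adj: (N, N) normalized adjacency (dense here only for clarity)
        n = h.size(0)
        cum_halt = h.new_zeros(n)    # accumulated halting probability per node
        remainder = h.new_ones(n)    # ACT-style remainders
        active = torch.ones(n, dtype=torch.bool, device=h.device)
        out = torch.zeros_like(h)
        z = h
        for _ in range(self.max_steps):
            z = adj @ z                                  # one message-passing exchange
            p = torch.sigmoid(self.halt(z)).squeeze(-1)  # halting prob. per node
            will_halt = active & (cum_halt + p >= 1.0 - self.eps)
            still_active = active & ~will_halt
            # halting nodes spend their remainder; active nodes contribute p
            w = torch.where(will_halt, remainder, p) * active.float()
            out = out + w.unsqueeze(-1) * z
            cum_halt = cum_halt + p * still_active.float()
            remainder = remainder - p * still_active.float()
            active = still_active
            if not active.any():                         # every node has halted
                break
        # nodes that hit max_steps without halting spend their remainder here
        out = out + (remainder * active.float()).unsqueeze(-1) * z
        return out
```

Following adaptive computation time, each node accumulates halting probabilities across exchanges and stops once the sum crosses 1 - eps; its output is the halting-weighted combination of its intermediate states, which keeps the number of communication steps node-dependent while leaving the whole procedure differentiable.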