A learnable sampling method for scalable graph neural networks

Neural Netw. 2023 May:162:412-424. doi: 10.1016/j.neunet.2023.03.015. Epub 2023 Mar 11.

Abstract

With the development of graph neural networks (GNNs), how to handle large-scale graph data has become an increasingly important topic. Currently, most GNN models that scale to large graphs rely on random sampling methods, but the sampling process in these models is detached from the forward propagation of the network. Moreover, many existing sampling schemes are designed around statistical estimation for graph convolutional networks (GCNs), in which the message-passing weights between nodes are fixed; such methods therefore do not extend to message-passing networks with variable weights, such as graph attention networks. Exploiting the end-to-end learning capability of neural networks, we propose a learnable sampling method. It overcomes the fact that random sampling operations admit no gradients, and it samples nodes with learned, non-fixed probabilities. In this way, the sampling process is dynamically coupled with the forward propagation of the features, allowing the network to be trained better, and the method generalizes to all message-passing models. In addition, we apply the learnable sampling method to GNNs and propose two models. Our method can be flexibly combined with different graph neural network architectures and achieves excellent accuracy on large-graph benchmark datasets, while the training loss converges to smaller values at a faster rate than with previous methods.
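The abstract does not spell out how gradients are made to flow through the sampling step. A standard way to achieve this is the Gumbel-softmax (straight-through) relaxation, and the sketch below illustrates that idea only; the class name `LearnableNeighborSampler`, the linear scorer, and the repeated-draw top-k scheme are illustrative assumptions, not the authors' actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableNeighborSampler(nn.Module):
    """Hypothetical sketch: score each candidate neighbor with a learnable
    linear layer and draw differentiable samples via the Gumbel-softmax
    relaxation, so gradients from the GNN loss reach the sampling step."""

    def __init__(self, feat_dim: int, tau: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)  # learned, non-fixed probabilities
        self.tau = tau                        # relaxation temperature

    def forward(self, neighbor_feats: torch.Tensor, k: int) -> torch.Tensor:
        # neighbor_feats: (num_neighbors, feat_dim) for one target node
        logits = self.scorer(neighbor_feats).squeeze(-1)  # (num_neighbors,)
        # Draw k relaxed one-hot samples; hard=True keeps discrete selections
        # in the forward pass but straight-through gradients in the backward.
        picks = [F.gumbel_softmax(logits, tau=self.tau, hard=True)
                 for _ in range(k)]
        mask = torch.stack(picks).amax(dim=0)  # soft union of the k draws
        # Aggregate only the sampled neighbors; the mask is differentiable
        # with respect to the scorer parameters, coupling sampling with
        # the forward propagation of the features.
        return (mask.unsqueeze(-1) * neighbor_feats).sum(dim=0) / max(k, 1)

# Usage: aggregate 5 of 20 candidate neighbors for one target node.
sampler = LearnableNeighborSampler(feat_dim=64)
agg = sampler(torch.randn(20, 64), k=5)  # (64,) differentiable aggregate
```

Because the mask carries gradients, the scorer's sampling distribution is updated jointly with the rest of the network during training, which is the property the abstract attributes to learnable sampling.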

Keywords: Graph neural networks; Large-scale data; Learnable sampling method.

MeSH terms

  • Benchmarking*
  • Learning*
  • Neural Networks, Computer
  • Probability