LaenNet: Learning robust GCNs by propagating labels

Neural Netw. 2023 Nov:168:652-664. doi: 10.1016/j.neunet.2023.09.035. Epub 2023 Sep 27.

Abstract

Graph Convolutional Networks (GCNs) are among the most significant methods for graph representation learning, and the GCN family has recently achieved great success in the community. In real-world scenarios, however, graph data may be imperfect, e.g., with noisy or sparse features or labels, which poses a great challenge to the robustness of GCNs. To meet this challenge, we propose a simple yet effective LAbel-ENhanced Network (LaenNet) architecture for GCNs, whose core idea is to propagate labels together with features. Specifically, we add an extra LaenNet module at one hidden layer of a GCN; the module propagates labels along the graph and then integrates them with the hidden representations as input to the deeper layers. The proposed LaenNet generalizes directly to variants of GCNs. We conduct extensive experiments to verify LaenNet on semi-supervised node classification tasks under four noisy and sparse graph data scenarios: graphs with noisy features, sparse features, noisy labels, and sparse labels. Empirical results indicate the superiority and robustness of LaenNet compared to state-of-the-art baseline models. The implementation code is available to ease reproducibility.
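The mechanism described above, propagating labels along the graph and fusing them with hidden representations, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: all function names (`row_normalize`, `propagate`, `laen_module`) are hypothetical, labels are one-hot rows (all-zero for unlabeled nodes), and integration with the hidden layer is done by simple feature concatenation.

```python
# Hypothetical sketch of a LaenNet-style module: smooth one-hot label vectors
# over the graph, then concatenate the smoothed labels with hidden node
# representations before feeding the deeper layers. Pure Python for clarity.

def row_normalize(adj):
    """Row-normalize a dense adjacency matrix (list of lists)."""
    out = []
    for row in adj:
        s = sum(row)
        out.append([v / s if s else 0.0 for v in row])
    return out

def propagate(adj_norm, mat):
    """One propagation step: matrix product adj_norm @ mat."""
    n, d = len(mat), len(mat[0])
    return [[sum(adj_norm[i][k] * mat[k][j] for k in range(n)) for j in range(d)]
            for i in range(n)]

def laen_module(adj, hidden, labels, steps=2):
    """Propagate labels `steps` times along the graph, then concatenate
    the resulting soft labels with the hidden representations."""
    a = row_normalize(adj)
    y = labels
    for _ in range(steps):
        y = propagate(a, y)
    # per-node concatenation of hidden features and propagated labels
    return [h + ly for h, ly in zip(hidden, y)]
```

On a 3-node path graph with self-loops where only the end nodes are labeled, two propagation steps give the middle (unlabeled) node a nonzero soft-label signal from both classes, which the deeper layers can then exploit.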

Keywords: Graph Convolutional Networks; Label; Robustness.

MeSH terms

  • Learning*
  • Reproducibility of Results