Contrastive learning of graphs under label noise

Neural Netw. 2024 Apr;172:106113. doi: 10.1016/j.neunet.2024.106113. Epub 2024 Jan 6.

Abstract

In the domain of graph-structured data learning, semi-supervised node classification is a critical task, relying mainly on a small fraction of labeled nodes together with abundant unlabeled nodes for training. However, real-world graph-structured data often suffer from label noise, which significantly undermines the performance of Graph Neural Networks (GNNs). This problem becomes increasingly severe when labels are scarce. To tackle the issue of sparse and noisy labels, we propose a novel approach, the Contrastive Robust Graph Neural Network (CR-GNN). First, given label sparsity and noise, we employ an unsupervised contrastive loss and further exploit the homophily of the graph structure, introducing a neighbor contrastive loss. Moreover, because contrastive learning typically constructs positive and negative samples through data augmentation, the augmented views may yield inconsistent predictions for the same node. Exploiting this, we propose a dynamic cross-entropy loss, which selects nodes with consistent predictions across views as reliable nodes for the cross-entropy loss and thereby helps mitigate overfitting to label noise. Finally, we propose a cross-space consistency objective to narrow the semantic gap between the contrastive and classification spaces. Extensive experiments on multiple publicly available datasets demonstrate that CR-GNN notably outperforms existing methods in resisting label noise.
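To make the dynamic cross-entropy idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the same GNN encodes two augmented views of the graph, and only labeled nodes whose predicted classes agree across the two views contribute to the supervised cross-entropy term. All names (gnn, view_a, view_b, labeled_mask) are illustrative assumptions.

import torch
import torch.nn.functional as F

def dynamic_cross_entropy(gnn, view_a, view_b, labels, labeled_mask):
    # Class logits for two augmented views of the same graph;
    # each view bundles whatever inputs the GNN expects.
    logits_a = gnn(view_a)  # shape: [num_nodes, num_classes]
    logits_b = gnn(view_b)

    # A node is "reliable" when both views predict the same class.
    consistent = logits_a.argmax(dim=1) == logits_b.argmax(dim=1)

    # Supervise only labeled nodes with cross-view agreement, so noisy
    # labels on inconsistently predicted nodes are ignored this step.
    reliable = consistent & labeled_mask
    if not reliable.any():
        return logits_a.new_zeros(())  # no reliable nodes this step

    loss_a = F.cross_entropy(logits_a[reliable], labels[reliable])
    loss_b = F.cross_entropy(logits_b[reliable], labels[reliable])
    return 0.5 * (loss_a + loss_b)

Because the set of reliable nodes is recomputed from the current predictions at every training step, the supervision signal adapts dynamically as the model improves, which is presumably where the "dynamic" in the loss's name comes from.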

Keywords: Contrastive learning; Graph neural networks; Noisy labels.

MeSH terms

  • Entropy
  • Learning*
  • Neural Networks, Computer*
  • Semantics