Graph contrastive learning with implicit augmentations

Neural Netw. 2023 Jun:163:156-164. doi: 10.1016/j.neunet.2023.04.001. Epub 2023 Apr 5.

Abstract

Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). However, altering certain edges or nodes can unexpectedly change the graph's characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which performs augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we further derive an upper bound on the expected contrastive loss, which improves the efficiency of the learning algorithm. Graph semantics are thus preserved within the augmentations without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art accuracy on downstream classification tasks compared to other graph contrastive baselines, and ablation studies demonstrate the effectiveness of each module in iGCL.
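To make the idea of implicit (latent-space) augmentation concrete, below is a minimal PyTorch-style sketch. It assumes a two-layer GCN encoder that parameterizes a Gaussian posterior q(z | X, A) and an inner-product decoder that reconstructs the adjacency matrix, as in a standard VGAE; the names GCNEncoder, latent_augment, and vgae_recon_loss are illustrative and are not iGCL's actual API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNEncoder(nn.Module):
        """Two-layer GCN producing the mean and log-variance of q(z | X, A)."""
        def __init__(self, in_dim, hid_dim, lat_dim):
            super().__init__()
            self.lin1 = nn.Linear(in_dim, hid_dim)
            self.lin_mu = nn.Linear(hid_dim, lat_dim)
            self.lin_logvar = nn.Linear(hid_dim, lat_dim)

        def forward(self, x, adj_norm):
            h = F.relu(adj_norm @ self.lin1(x))       # first graph convolution
            mu = adj_norm @ self.lin_mu(h)            # posterior mean per node
            logvar = adj_norm @ self.lin_logvar(h)    # posterior log-variance per node
            return mu, logvar

    def latent_augment(mu, logvar):
        """One implicit augmentation: reparameterized draw z = mu + sigma * eps."""
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def vgae_recon_loss(z, adj):
        """Topology reconstruction via an inner-product decoder, A_hat = sigmoid(z z^T)."""
        logits = z @ z.t()
        return F.binary_cross_entropy_with_logits(logits, adj)

    # Usage: two latent views of the same graph form a positive pair
    # for the contrastive objective.
    enc = GCNEncoder(in_dim=16, hid_dim=32, lat_dim=8)
    x, adj = torch.randn(10, 16), torch.eye(10)       # toy graph with 10 nodes
    mu, logvar = enc(x, adj)
    z1, z2 = latent_augment(mu, logvar), latent_augment(mu, logvar)

In practice a symmetrically normalized adjacency matrix would replace the toy identity used here, but the mechanism is the same: perturbations happen in the learned latent space rather than on the raw graph.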
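As for how an upper bound on the expected contrastive loss can avoid sampling altogether, the following is a generic derivation sketch, not necessarily the exact bound derived in the paper. It assumes an InfoNCE-style loss whose similarity scores s_i are Gaussian under the latent augmentation distribution, so that Jensen's inequality and the Gaussian moment generating function yield a closed form:

    \mathcal{L} = -s_{+} + \log \sum_{i} \exp(s_i),
        \qquad s_i \sim \mathcal{N}(\mu_i, \sigma_i^2)

    % Jensen's inequality E[\log X] \le \log E[X], together with
    % E[\exp(s_i)] = \exp(\mu_i + \sigma_i^2 / 2) for Gaussian s_i, gives
    \mathbb{E}[\mathcal{L}]
        \;\le\; -\mu_{+} + \log \sum_{i} \exp\Big(\mu_i + \tfrac{\sigma_i^2}{2}\Big)

Minimizing the closed-form right-hand side replaces Monte Carlo sampling of augmentations, which is the kind of efficiency gain the abstract refers to.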

Keywords: Contrastive learning; Graph auto-encoders; Graph neural networks; Latent augmentations.
