Review for NeurIPS paper: Graph Contrastive Learning with Augmentations


Summary and Contributions: This paper proposes a contrastive learning algorithm to learn graph representations in an unsupervised manner. It extends SimCLR [1] to graph representation learning, producing representations that can be used for downstream graph classification in semi-supervised, unsupervised, and transfer learning scenarios. To this end, the authors propose several graph augmentation techniques required by the contrastive objective and analyse their effects on different types of datasets. The four augmentation techniques explored in the paper are: node dropping, edge perturbation, attribute masking, and subgraph sampling. In their empirical study, the authors evaluate these augmentations on different kinds of graph-structured data, such as social networks and biochemical molecules, showing that the most effective technique varies by domain, depending on the nature of the structure the graph represents. The resulting pre-training technique shows promising results across different datasets and tasks.
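For concreteness, the four augmentations can be sketched on a toy undirected graph. This is a minimal illustration, not the authors' implementation: the graph representation (node set, edge set as frozensets, per-node feature lists), the 20%/80% ratios, and all function names are assumptions chosen for clarity.

```python
import random

def make_graph():
    # Toy graph: 6 nodes on a cycle plus one chord, 2-dim node features.
    nodes = set(range(6))
    edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]}
    feats = {n: [1.0, float(n)] for n in nodes}
    return nodes, edges, feats

def node_dropping(nodes, edges, feats, ratio=0.2, rng=random):
    # Drop a random fraction of nodes along with their incident edges.
    drop = set(rng.sample(sorted(nodes), int(len(nodes) * ratio)))
    kept = nodes - drop
    return kept, {e for e in edges if e <= kept}, {n: feats[n] for n in kept}

def edge_perturbation(nodes, edges, feats, ratio=0.2, rng=random):
    # Remove a random fraction of edges, then add the same number of random ones.
    k = int(len(edges) * ratio)
    removed = set(rng.sample(sorted(edges, key=sorted), k))
    out = edges - removed
    nlist = sorted(nodes)
    while len(out) < len(edges):
        u, v = rng.sample(nlist, 2)
        out.add(frozenset((u, v)))
    return set(nodes), out, dict(feats)

def attribute_masking(nodes, edges, feats, ratio=0.2, rng=random):
    # Zero out the feature vectors of a random fraction of nodes.
    masked = set(rng.sample(sorted(nodes), int(len(nodes) * ratio)))
    new_feats = {n: ([0.0] * len(f) if n in masked else list(f))
                 for n, f in feats.items()}
    return set(nodes), set(edges), new_feats

def subgraph(nodes, edges, feats, ratio=0.8, rng=random):
    # Sample a connected subgraph via a random walk from a random seed node.
    target = max(1, int(len(nodes) * ratio))
    current = rng.choice(sorted(nodes))
    visited = {current}
    for _ in range(20 * len(nodes)):  # step cap avoids looping on small components
        if len(visited) >= target:
            break
        nbrs = [v for e in edges if current in e for v in e if v != current]
        if not nbrs:
            break
        current = rng.choice(nbrs)
        visited.add(current)
    return visited, {e for e in edges if e <= visited}, {n: feats[n] for n in visited}
```

Each function maps a graph to a perturbed view; the contrastive objective then pulls representations of two views of the same graph together while pushing apart views of different graphs.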