Rethinking and scaling up graph contrastive learning: an extremely efficient approach with group discrimination

Yizhen Zheng, Shirui Pan, Cheng Siong Lee, Yu Zheng, Phillip S Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

86 Citations (Scopus)

Abstract

Graph contrastive learning (GCL) alleviates the heavy reliance on label information for graph representation learning (GRL) via self-supervised learning schemes. The core idea is to learn by maximising mutual information for similar instances, which requires similarity computation between two node instances. However, GCL is inefficient in both time and memory consumption. In addition, GCL normally requires a large number of training epochs to be well-trained on large-scale datasets. Inspired by an observation of a technical defect (i.e., inappropriate usage of the Sigmoid function) commonly found in two representative GCL works, DGI and MVGRL, we revisit GCL and introduce a new learning paradigm for self-supervised graph representation learning, namely, Group Discrimination (GD), and propose a novel GD-based method called Graph Group Discrimination (GGD). Instead of similarity computation, GGD directly discriminates two groups of node samples with a very simple binary cross-entropy loss. In addition, GGD requires far fewer training epochs to obtain competitive performance compared with GCL methods on large-scale datasets. These two advantages make GGD highly efficient. Extensive experiments show that GGD outperforms state-of-the-art self-supervised methods on eight datasets. In particular, GGD can be trained in 0.18 seconds (6.44 seconds including data preprocessing) on ogbn-arxiv, which is orders of magnitude (10,000+) faster than GCL baselines while consuming much less memory. Trained for 9 hours on ogbn-papers100M, which has billions of edges, GGD outperforms its GCL counterparts in both accuracy and efficiency.
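The abstract's central idea, replacing pairwise similarity computation with a binary cross-entropy loss that separates real from corrupted node samples, can be sketched as follows. This is a minimal NumPy illustration of the Group Discrimination concept, not the paper's exact formulation: the sum-to-scalar aggregation and the function name `group_discrimination_loss` are illustrative assumptions.

```python
import numpy as np

def group_discrimination_loss(real_emb, corrupt_emb):
    """Binary cross-entropy over two groups of node embeddings.

    Real (uncorrupted) nodes are labelled 1, corrupted nodes 0.
    Each node's embedding is aggregated to a single scalar logit,
    so the loss needs no pairwise similarity computation between
    node instances, which is the source of GGD's efficiency claim.
    """
    # One scalar score per node: aggregate embedding dimensions.
    logits = np.concatenate([real_emb.sum(axis=1), corrupt_emb.sum(axis=1)])
    labels = np.concatenate([np.ones(len(real_emb)), np.zeros(len(corrupt_emb))])
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    eps = 1e-12  # numerical safety for log
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))

# Toy example: 4 "real" and 4 "corrupted" node embeddings of width 8.
rng = np.random.default_rng(0)
real = rng.normal(loc=1.0, size=(4, 8))
corrupt = rng.normal(loc=-1.0, size=(4, 8))
loss = group_discrimination_loss(real, corrupt)
```

In the paper's setting, the corrupted group would come from an augmented or shuffled view of the graph; here random offsets stand in for that step.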

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 35 (NeurIPS 2022)
Editors: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh
Place of Publication: San Diego CA USA
Publisher: Neural Information Processing Systems (NIPS)
Number of pages: 12
ISBN (Electronic): 9781713871088
Publication status: Published - 2022
Event: Advances in Neural Information Processing Systems 2022 - New Orleans Convention Center, New Orleans, United States of America
Duration: 28 Nov 2022 - 9 Dec 2022
Conference number: 36
https://proceedings.neurips.cc/paper_files/paper/2022 (Proceedings)
https://nips.cc/Conferences/2022
https://openreview.net/group?id=NeurIPS.cc/2022/Conference (Peer Reviews)

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 35
ISSN (Print): 1049-5258

Conference

Conference: Advances in Neural Information Processing Systems 2022
Abbreviated title: NeurIPS 2022
Country/Territory: United States of America
City: New Orleans
Period: 28/11/22 - 9/12/22

Keywords

  • graph contrastive learning
  • representation learning
  • self-supervised learning
  • unsupervised learning