Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention. Despite the proliferation of graph augmentation strategies, some fundamental questions remain unclear: what information is essentially learned by GCL? Are there general augmentation rules behind the different augmentations? If so, what are they, and what insights can they bring? In this paper, we answer these questions by establishing the connection between GCL and the graph spectrum. Through an experimental investigation in the spectral domain, we first identify the General grAph augMEntation (GAME) rule for GCL: the difference between the high-frequency parts of two augmented graphs should be larger than the difference between their low-frequency parts. This rule reveals a fundamental principle for revisiting current graph augmentations and designing new, effective ones. We then theoretically prove, via a contrastive invariance theorem, that GCL is able to learn invariant information. Combined with the GAME rule, this uncovers, for the first time, that the representations learned by GCL essentially encode low-frequency information, which explains why GCL works. Guided by this rule, we propose a spectral graph contrastive learning module (SpCo), a general and GCL-friendly plug-in. We combine it with different existing GCL models, and extensive experiments demonstrate that it consistently improves the performance of a wide variety of GCL methods.
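The GAME rule lends itself to a quick numerical check. The sketch below is illustrative only, not the paper's implementation: the function names, the `low_ratio` band split, and the use of the first view's Laplacian eigenbasis as a shared spectral coordinate system are all assumptions of this example. It compares two augmented views of the same graph in the spectral domain:

```python
import numpy as np

def sym_norm_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    Its eigenvalues lie in [0, 2]; small ones are low frequencies."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]

def game_rule_gap(A1, A2, low_ratio=0.5):
    """Compare two augmented views in the spectral domain (illustrative).

    Returns (low_diff, high_diff): the mean absolute difference of the two
    views' frequency responses on the low- and high-frequency bands.
    The GAME rule expects high_diff > low_diff for a good augmentation pair.
    """
    L1 = sym_norm_laplacian(A1)
    L2 = sym_norm_laplacian(A2)
    # Shared spectral basis: eigenvectors of the first view's Laplacian,
    # sorted by ascending frequency (a simplification -- each view really
    # has its own eigenbasis).
    eigvals, U = np.linalg.eigh(L1)
    r1 = eigvals                   # view 1's response is exactly its spectrum
    r2 = np.diag(U.T @ L2 @ U)     # view 2's response in the same basis
    k = max(1, int(low_ratio * len(eigvals)))
    low_diff = np.abs(r1[:k] - r2[:k]).mean()
    high_diff = np.abs(r1[k:] - r2[k:]).mean()
    return low_diff, high_diff

# Example: two edge-dropping views of a random graph.
rng = np.random.default_rng(0)
A = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1)
A = A + A.T                                            # symmetric adjacency
A1 = A * np.triu(rng.random(A.shape) < 0.9, 1); A1 = A1 + A1.T
A2 = A * np.triu(rng.random(A.shape) < 0.9, 1); A2 = A2 + A2.T
low_diff, high_diff = game_rule_gap(A1, A2)
print(f"low-band diff = {low_diff:.4f}, high-band diff = {high_diff:.4f}")
```

On such edge-dropping pairs, a larger high-band difference than low-band difference is what the GAME rule identifies as the signature of an effective augmentation pair.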
Author Information
Nian Liu (Beijing University of Posts and Telecommunications)
Xiao Wang (Beijing University of Posts and Telecommunications)
Deyu Bo (Beijing University of Posts and Telecommunications)
Chuan Shi (Beijing University of Posts and Telecommunications, Tsinghua University)
Jian Pei (Simon Fraser University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum
More from the Same Authors
- 2022 Poster: Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure
  Shaohua Fan · Xiao Wang · Yanhu Mo · Chuan Shi · Jian Tang
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Spotlight: Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure
  Shaohua Fan · Xiao Wang · Yanhu Mo · Chuan Shi · Jian Tang
- 2022 Poster: Uncovering the Structural Fairness in Graph Contrastive Learning
  Ruijia Wang · Xiao Wang · Chuan Shi · Le Song
- 2022 Poster: Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization
  Feihu Huang · Shangqian Gao · Jian Pei · Heng Huang
- 2021 Poster: Robust Counterfactual Explanations on Graph Neural Networks
  Mohit Bajaj · Lingyang Chu · Zi Yu Xue · Jian Pei · Lanjun Wang · Peter Cho-Ho Lam · Yong Zhang
- 2021 Poster: Universal Graph Convolutional Networks
  Di Jin · Zhizhi Yu · Cuiying Huo · Rui Wang · Xiao Wang · Dongxiao He · Jiawei Han
- 2021 Poster: Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration
  Xiao Wang · Hongrui Liu · Chuan Shi · Cheng Yang