

Poster

FUG: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features

Jitao Zhao · Dongxiao He · Meng Ge · Lianze Shan · Xin Wang · Di Jin · Zhiyong Feng

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Graph Neural Networks (GNNs), known for their effective graph encoding, are extensively used across various fields. Graph self-supervised pre-training, which trains GNN encoders in a self-supervised manner to generate high-quality graph representations, has garnered widespread attention. However, due to the inherently complex characteristics of graphs, GNN encoders pre-trained on one dataset struggle to adapt directly to others with different node feature shapes. This typically necessitates either model rebuilding or data alignment. The former sacrifices transferability, since each dataset requires a new model to be built, while the latter incurs serious knowledge loss, since it forces features into a uniform shape through preprocessing such as Principal Component Analysis (PCA). To address this challenge, we propose a new Feature-Universal Graph contrastive pre-training strategy (FUG) that naturally avoids the need for model rebuilding and data reshaping. Specifically, inspired by discussions in existing work on the relationship between Contrastive Learning (CL) and PCA, we conduct a theoretical analysis and discover that PCA's optimization objective is a special case of the CL objective. We design an encoder with CL constraints to emulate PCA's generation of the basis transformation matrix, which is used to losslessly adapt features across datasets. Furthermore, we introduce a global uniformity constraint to replace negative sampling, reducing the time complexity from $O(n^2)$ to $O(n)$, and by explicitly defining positive samples, FUG avoids the substantial memory requirements of data augmentation. In cross-domain experiments, FUG achieves performance close to that of newly re-trained models.
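To ground the PCA connection, the sketch below shows how classical PCA produces a basis transformation matrix from features of arbitrary dimensionality; this matrix-generating step is what the abstract describes the FUG encoder emulating under CL constraints. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the function name `pca_basis` and the use of `torch.linalg.eigh` are our assumptions.

```python
import torch

def pca_basis(X: torch.Tensor, k: int):
    """Classical PCA as a basis transformation.

    X: (n, d) node feature matrix with any feature dimension d.
    Returns the projected (n, k) features and the (d, k) basis matrix W.
    The top-k eigenvectors of the feature covariance form W, which maps
    d-dimensional features into a shared k-dimensional space.
    (Illustrative sketch only; FUG learns this mapping with a CL-constrained
    encoder rather than computing it by eigendecomposition.)
    """
    Xc = X - X.mean(dim=0, keepdim=True)        # center the features
    cov = Xc.T @ Xc / (X.shape[0] - 1)          # (d, d) covariance matrix
    eigvals, eigvecs = torch.linalg.eigh(cov)   # eigenvalues in ascending order
    W = eigvecs[:, -k:]                         # top-k principal directions
    return Xc @ W, W                            # adapted features, basis matrix
```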
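To make the complexity claim concrete, the following sketch contrasts the standard pairwise uniformity loss, which is $O(n^2)$ because it sums over all embedding pairs, with a mean-based global surrogate that touches each sample once and is therefore $O(n)$. The global term here is a plausible stand-in chosen for illustration; FUG's exact global uniformity constraint may differ.

```python
import torch
import torch.nn.functional as F

def pairwise_uniformity(z: torch.Tensor, t: float = 2.0):
    """Standard uniformity loss over all pairs: O(n^2) in batch size."""
    z = F.normalize(z, dim=-1)                  # embeddings on the unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)       # n*(n-1)/2 pairwise distances
    return sq_dists.mul(-t).exp().mean().log()

def global_uniformity(z: torch.Tensor):
    """Assumed O(n) surrogate: pull the batch-mean embedding toward the
    origin, encouraging normalized embeddings to spread over the sphere.
    Computing the mean is a single pass over the batch, hence linear time."""
    z = F.normalize(z, dim=-1)
    return z.mean(dim=0).pow(2).sum()
```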
