Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Evolving Graph Generalization Estimation via Self-Supervised Learning

Bin Lu · Tingyan Ma · Xiaoying Gan · Luoyi Fu · Xinbing Wang · Chenghu Zhou · Shiyu Liang


Graph Neural Networks are widely deployed across many domains, but they often struggle to maintain accurate representations as graphs evolve. We theoretically establish a lower bound, proving that under mild conditions, representation distortion inevitably occurs over time. To estimate this temporal representation distortion without human annotation after deployment, one naive approach is to pre-train a recurrent model before deployment and use it afterwards, but the resulting estimates are far from satisfactory. In this paper, we analyze the representation distortion from an information-theoretic perspective and attribute it primarily to inaccurate feature extraction during evolution. Consequently, we introduce Smart, a straightforward and effective baseline enhanced by an adaptive feature extractor trained through self-supervised graph reconstruction. Experimental results on real-world evolving graphs demonstrate outstanding performance and, in particular, the necessity of self-supervised graph reconstruction: on the OGB-arXiv dataset, the estimation metric MAPE deteriorates from 2.19% to 8.00% without reconstruction.
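The abstract does not specify Smart's architecture, so the following is only a minimal sketch of the general idea of self-supervised graph reconstruction: a GCN-style encoder produces node embeddings, an inner-product decoder scores edges, and a binary cross-entropy loss against the observed adjacency matrix supervises the feature extractor without labels. All function names and the one-layer encoder are illustrative assumptions, not the paper's method.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def encode(A, X, W):
    # One-layer GCN-style encoder: Z = ReLU(A_norm @ X @ W)
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def reconstruction_loss(A, Z, eps=1e-9):
    # Inner-product decoder with sigmoid, then binary cross-entropy
    # between predicted edge probabilities and the observed adjacency.
    P = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    return -np.mean(A * np.log(P + eps) + (1 - A) * np.log(1 - P + eps))

# Toy example: a 3-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))   # node features
W = rng.normal(size=(4, 2))   # encoder weights (would be learned)
Z = encode(A, X, W)
loss = reconstruction_loss(A, Z)
print(Z.shape, float(loss))
```

In a full pipeline the weights would be updated by gradient descent on this loss as the graph evolves, so the extractor adapts to distribution shift without any human annotation.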
