

Poster

Supra-Laplacian Encoding for Transformer on Dynamic Graphs

Yannis Karmim · Raphaël Fournier-S'niehotta · Nicolas THOME

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching. However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GT lose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding that leverages the GT architecture while keeping spatio-temporal information. Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM) and Dynamic Graph Transformers on 9 datasets. Code and instructions to reproduce our results will be open-sourced.
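
The supra-Laplacian idea lends itself to a short sketch. Below is a minimal, illustrative Python example (not the authors' implementation; the function names, the inter-layer coupling weight, and the number of eigenvectors k are assumptions) of how a discrete-time dynamic graph can be turned into a multi-layer supra-graph whose Laplacian eigenvectors serve as spatio-temporal positional encodings:

```python
# Minimal sketch (not the authors' code): build the supra-Laplacian of a
# discrete-time dynamic graph and use its low-frequency eigenvectors as
# spatio-temporal positional encodings. The inter-layer coupling weight
# and the number of eigenvectors k are illustrative assumptions.
import numpy as np
import scipy.sparse as sp

def build_supra_laplacian(snapshots, inter_layer_weight=1.0):
    """snapshots: list of T symmetric (N x N) adjacency matrices, one per step.
    Returns the (T*N x T*N) combinatorial Laplacian of the multi-layer graph
    in which each node is also linked to its own copy in adjacent snapshots."""
    T, N = len(snapshots), snapshots[0].shape[0]
    # Block-diagonal part: the intra-layer edges of every snapshot.
    supra_adj = sp.block_diag([sp.csr_matrix(A) for A in snapshots], format="lil")
    # Inter-layer part: couple node i at time t with node i at time t+1.
    for t in range(T - 1):
        for i in range(N):
            supra_adj[t * N + i, (t + 1) * N + i] = inter_layer_weight
            supra_adj[(t + 1) * N + i, t * N + i] = inter_layer_weight
    supra_adj = supra_adj.tocsr()
    degrees = np.asarray(supra_adj.sum(axis=1)).ravel()
    return sp.diags(degrees) - supra_adj  # combinatorial Laplacian L = D - A

def spectral_encoding(snapshots, k=8):
    """Eigenvectors of the k smallest eigenvalues of the supra-Laplacian,
    reshaped to one k-dimensional encoding per (time, node) pair. A dense
    solver keeps the sketch simple; large graphs would call for
    scipy.sparse.linalg.eigsh instead."""
    L = build_supra_laplacian(snapshots)
    _, eigvecs = np.linalg.eigh(L.toarray())  # eigenvalues in ascending order
    T, N = len(snapshots), snapshots[0].shape[0]
    return eigvecs[:, :k].reshape(T, N, k)

# Toy usage: 3 snapshots of a 4-node graph.
rng = np.random.default_rng(0)
snaps = []
for _ in range(3):
    A = np.triu(rng.integers(0, 2, (4, 4)).astype(float), 1)  # random edges
    snaps.append(A + A.T)  # symmetric adjacency, no self-loops
print(spectral_encoding(snaps, k=4).shape)  # (3, 4, 4)
```

In a Transformer pipeline, such an encoding would be added to each node token before self-attention over the supra-graph; the cross-attention edge readout for link prediction is a separate component not sketched here.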
