Poster

NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification

Qitian Wu · Wentao Zhao · Zenan Li · David P Wipf · Junchi Yan

Hall J #121

Keywords: [ Large Graph Analysis ] [ Graph Transformers ] [ Graph Structure Learning ] [ Graph Neural Networks ] [ Scalable Message Passing ]

Thu 1 Dec 2 p.m. PST — 4 p.m. PST
 
Spotlight presentation: Lightning Talks 1B-1
Tue 6 Dec 9 a.m. PST — 9:15 a.m. PST

Abstract:

Graph neural networks have been extensively studied for learning with inter-connected data. Despite this, recent evidence has revealed GNNs' deficiencies related to over-squashing, heterophily, handling long-range dependencies, edge incompleteness and, particularly, the absence of graphs altogether. While a plausible solution is to learn new adaptive topology for message passing, the quadratic complexity of all-pair interactions makes it hard to guarantee both scalability and precision on large networks. In this paper, we introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes, as an important building block for a new class of Transformer networks for node classification on large graphs, dubbed NodeFormer. Specifically, the efficient computation is enabled by a kernelized Gumbel-Softmax operator that reduces the algorithmic complexity to linear in the number of nodes, allowing latent graph structures to be learned from large, potentially fully-connected graphs in a differentiable manner. We also provide accompanying theory to justify our design. Extensive experiments demonstrate the promising efficacy of the method on various tasks, including node classification on graphs (with up to 2M nodes) and graph-enhanced applications (e.g., image classification) where input graphs are missing. The code is available at https://github.com/qitianwu/NodeFormer.
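To make the core idea concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that) of how a kernelized Gumbel-Softmax attention can aggregate over all node pairs in linear time. It assumes Performer-style positive random features to factor the softmax kernel and absorbs per-node Gumbel noise into the key features; the names positive_random_features, kernelized_gumbel_attention, tau, and num_features are hypothetical.

import numpy as np

def positive_random_features(x, w):
    # Positive random feature map: phi(x) = exp(w.x - ||x||^2 / 2) / sqrt(m),
    # chosen so that E[phi(x) . phi(y)] = exp(x . y) (the softmax kernel).
    m = w.shape[0]
    h = x @ w.T - 0.5 * np.sum(x * x, axis=-1, keepdims=True)
    h = h - h.max()  # global stabilizer; cancels in the attention ratio below
    return np.exp(h) / np.sqrt(m)

def kernelized_gumbel_attention(q, k, v, tau=0.25, num_features=64, rng=None):
    # Sketch of all-pair attention softmax((q_i . k_j + g_j) / tau) in O(n):
    # the exponential factors through the feature map, and the per-node
    # Gumbel noise g_j becomes a multiplicative reweighting of key j.
    rng = rng or np.random.default_rng(0)
    n, d = q.shape
    w = rng.standard_normal((num_features, d))               # shared random projections
    phi_q = positive_random_features(q / np.sqrt(tau), w)    # (n, m)
    phi_k = positive_random_features(k / np.sqrt(tau), w)    # (n, m)
    g = rng.gumbel(size=(n, 1))                              # per-node Gumbel noise
    phi_k = phi_k * np.exp(g / tau)                          # absorb noise into keys
    kv = phi_k.T @ v                                         # (m, d_v): sum_j phi_k_j v_j
    z = phi_k.T @ np.ones((n, 1))                            # (m, 1): normalizer terms
    return (phi_q @ kv) / (phi_q @ z + 1e-6)                 # (n, d_v), no n x n matrix

# Usage on random node features: 1000 nodes, 32 dims.
rng = np.random.default_rng(42)
x = rng.standard_normal((1000, 32))
h = kernelized_gumbel_attention(x, x, x, rng=rng)
print(h.shape)  # (1000, 32)

Because the feature maps factor the exponential kernel, the n-by-n attention matrix is never materialized: the aggregation reduces to two small matrix products against the num_features dimension, which is what allows all-pair message passing to scale to millions of nodes.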
