Poster
Simplifying and Empowering Transformers for Large-Graph Representations
Qitian Wu · Wentao Zhao · Chenxiao Yang · Hengrui Zhang · Fan Nie · Haitian Jiang · Yatao Bian · Junchi Yan

Wed Dec 13 03:00 PM -- 05:00 PM (PST) @ Great Hall & Hall B1+B2 #628

Learning representations on large graphs is a long-standing challenge due to the inter-dependence among massive data points. Transformers, an emerging class of foundation encoders for graph-structured data, have shown promising performance on small graphs thanks to global attention that captures all-pair influence beyond neighboring nodes. Even so, existing approaches tend to inherit the spirit of Transformers in language and vision tasks and embrace complicated models that stack deep multi-head attention layers. In this paper, we demonstrate that even a single-layer attention can deliver surprisingly competitive performance across node property prediction benchmarks where the number of nodes ranges from thousands to billions. This encourages us to rethink the design philosophy for Transformers on large graphs, where global attention becomes a computational overhead that hinders scalability. We frame the proposed scheme as Simplified Graph Transformers (SGFormer), which is empowered by a simple attention model that can efficiently propagate information among arbitrary nodes in a single layer. SGFormer requires no positional encodings, feature/graph pre-processing, or augmented losses. Empirically, SGFormer scales to the web-scale graph ogbn-papers100M and yields up to 141x inference acceleration over SOTA Transformers on medium-sized graphs. Beyond the current results, we believe the proposed methodology opens a new technical path of independent interest for building Transformers on large graphs.
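
For illustration only, below is a minimal PyTorch sketch of a one-layer, linear-complexity global attention in the spirit the abstract describes (all-pair propagation in a single layer, no positional encodings). This is not the authors' released implementation; the class and parameter names (SimpleGlobalAttention, hidden_dim) are hypothetical, and the specific (1 + q·k)-style weighting is an assumption chosen so the weights stay nonnegative.

# Minimal sketch, NOT the authors' code: one-layer global attention whose
# cost is linear in the number of nodes N, via the associativity trick
# (Q @ K^T) @ V == Q @ (K^T @ V).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGlobalAttention(nn.Module):  # hypothetical name
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.q = nn.Linear(in_dim, hidden_dim)
        self.k = nn.Linear(in_dim, hidden_dim)
        self.v = nn.Linear(in_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; attention spans all N nodes.
        q = F.normalize(self.q(x), dim=-1)   # [N, d], unit-norm rows
        k = F.normalize(self.k(x), dim=-1)   # [N, d], unit-norm rows
        v = self.v(x)                        # [N, d]
        n = x.size(0)
        # Weight between nodes u and j is (1 + q_u . k_j) >= 0 because
        # q, k rows have unit norm. Normalizing over j gives:
        #   z_u = (sum_j v_j + q_u @ (K^T V)) / (N + q_u . sum_j k_j)
        kv = k.transpose(0, 1) @ v                        # [d, d], O(N d^2)
        num = v.sum(dim=0, keepdim=True) + q @ kv         # [N, d]
        denom = n + q @ k.sum(dim=0)                      # [N]
        return num / denom.unsqueeze(-1)                  # [N, d]

x = torch.randn(10_000, 64)                  # 10k nodes, 64 features
out = SimpleGlobalAttention(64, 128)(x)
print(out.shape)                             # torch.Size([10000, 128])

Because K^T V is a d x d matrix, the all-pair attention never materializes the N x N score matrix, which is what makes a single global-attention layer feasible on graphs with millions of nodes.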

Author Information

Qitian Wu (Shanghai Jiao Tong University)
Wentao Zhao (Shanghai Jiao Tong University)
Chenxiao Yang (Shanghai Jiao Tong University)
Hengrui Zhang (University of Illinois, Chicago)
Fan Nie (Shanghai Jiao Tong University)
Haitian Jiang (New York University)

I am a first-year PhD student at the Courant Institute, NYU, advised by Jinyang Li. My current interests are machine learning systems, machine learning, and graph neural networks.

Yatao Bian (Tencent AI Lab)
Junchi Yan (Shanghai Jiao Tong University)
