The Transformer architecture has achieved remarkable success in a number of domains, including natural language processing and computer vision. However, on graph-structured data, transformers have not attained competitive performance, especially on large graphs. In this paper, we identify two main deficiencies of current graph transformers: (1) existing node sampling strategies are agnostic to the graph characteristics and the training process; (2) most sampling strategies focus only on local neighbors and neglect long-range dependencies in the graph. We conduct experimental investigations on synthetic datasets to show that existing sampling strategies are sub-optimal. To tackle these problems, we formulate node sampling in the Graph Transformer as an adversarial bandit problem, where the rewards are related to the attention weights and can vary during training. Meanwhile, we propose a hierarchical attention scheme with graph coarsening to capture long-range interactions while reducing computational complexity. Finally, we conduct extensive experiments on real-world datasets to demonstrate the superiority of our method over existing graph transformers and popular GNNs.
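As context for the bandit formulation above, the following is a minimal EXP3-style sketch of adversarial-bandit node sampling: each candidate node is treated as one arm, and the attention weight the transformer assigns to a sampled node serves as its time-varying reward. This is an illustrative sketch under those assumptions, not the paper's implementation; the class name, the `gamma` parameter, and the placeholder attention score are all hypothetical.

```python
import math
import random

# EXP3-style sketch of adversarial-bandit node sampling.
# Assumption (not taken from the paper's text): one arm per candidate
# node, with the reward for a sampled node set to the attention weight
# (in [0, 1]) that the transformer assigned to it.

class AdversarialNodeSampler:
    def __init__(self, num_candidates, gamma=0.1):
        self.gamma = gamma                      # exploration rate
        self.weights = [1.0] * num_candidates   # one weight per arm

    def probabilities(self):
        # Mix the weight-proportional distribution with uniform
        # exploration, as in the standard EXP3 algorithm.
        total = sum(self.weights)
        k = len(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def sample(self):
        probs = self.probabilities()
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    def update(self, arm, reward):
        # The importance-weighted reward estimate keeps the update
        # unbiased, which is what lets EXP3 cope with non-stationary
        # rewards such as attention weights that drift during training.
        probs = self.probabilities()
        estimated = reward / probs[arm]
        self.weights[arm] *= math.exp(
            self.gamma * estimated / len(self.weights))

# Usage: after each training step, feed back the attention weight
# the model gave to the sampled node.
sampler = AdversarialNodeSampler(num_candidates=8)
node = sampler.sample()
attention_weight = 0.42   # placeholder for the model's attention score
sampler.update(node, attention_weight)
```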
Author Information
Zaixi Zhang (University of Science and Technology of China)
Qi Liu (" University of Science and Technology of China, China")
Qingyong Hu (Department of Computer Science and Engineering, Hong Kong University of Science and Technology)
Chee-Kong Lee (Tencent Quantum Lab)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Hierarchical Graph Transformer with Adaptive Node Sampling
  Wed. Dec 7th 05:00 -- 07:00 PM
More from the Same Authors
- 2022 Poster: DARE: Disentanglement-Augmented Rationale Extraction
  Linan Yue · Qi Liu · Yichao Du · Yanqing An · Li Wang · Enhong Chen
- 2022 Spotlight: Lightning Talks 5B-4
  Yuezhi Yang · Zeyu Yang · Yong Lin · Yishi Xu · Linan Yue · Tao Yang · Weixin Chen · Qi Liu · Jiaqi Chen · Dongsheng Wang · Baoyuan Wu · Yuwang Wang · Hao Pan · Shengyu Zhu · Zhenwei Miao · Yan Lu · Lu Tan · Bo Chen · Yichao Du · Haoqian Wang · Wei Li · Yanqing An · Ruiying Lu · Peng Cui · Nanning Zheng · Li Wang · Zhibin Duan · Xiatian Zhu · Mingyuan Zhou · Enhong Chen · Li Zhang
- 2022 Spotlight: DARE: Disentanglement-Augmented Rationale Extraction
  Linan Yue · Qi Liu · Yichao Du · Yanqing An · Li Wang · Enhong Chen
- 2021 Poster: Motif-based Graph Self-Supervised Learning for Molecular Property Prediction
  Zaixi Zhang · Qi Liu · Hao Wang · Chengqiang Lu · Chee-Kong Lee
- 2020 Poster: Sampling-Decomposable Generative Adversarial Recommender
  Binbin Jin · Defu Lian · Zheng Liu · Qi Liu · Jianhui Ma · Xing Xie · Enhong Chen