Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolutional Networks (GCNs). However, due to the intractable computation of the optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the neighbor embeddings or learned weights involved in the optimal sampling distribution are \emph{changing} during training and \emph{not known a priori}, but only \emph{partially observed} when sampled, thus making the derivation of an optimal variance-reduced sampler non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversarial bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly throughout training. Thus a good sampler needs to acquire variance information about more neighbors (exploration) while at the same time optimizing the immediate sampling variance (exploitation). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3. We demonstrate the efficiency and effectiveness of our approach on multiple datasets.
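To make the exploration/exploitation idea concrete, below is a minimal sketch of a generic EXP3-style bandit sampler over a node's neighbors in Python (NumPy only). This is an illustrative assumption, not the paper's exact algorithm: the class name `BanditNeighborSampler`, the hyperparameters `eta` and `gamma`, and the reward proxy (the norm of a hypothetical, drifting neighbor embedding) are invented for the example; the abstract only states that the rewards are related to node embeddings and learned weights.

```python
# EXP3-style neighbor sampler: an illustrative sketch, not the authors' exact method.
# One bandit per target node; each arm is a neighbor; rewards drift over training.
import numpy as np

class BanditNeighborSampler:
    def __init__(self, num_neighbors, eta=0.1, gamma=0.1):
        self.eta = eta          # learning rate for the exponential weights (assumed)
        self.gamma = gamma      # uniform-exploration mixing coefficient (assumed)
        self.weights = np.ones(num_neighbors)  # one arm per neighbor

    def probabilities(self):
        # Mix normalized exponential weights with uniform exploration (EXP3).
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / len(self.weights)

    def sample(self):
        # Draw one neighbor and return its index and sampling probability.
        p = self.probabilities()
        arm = int(np.random.choice(len(p), p=p))
        return arm, p[arm]

    def update(self, arm, prob, reward):
        # Importance-weighted reward keeps the estimate unbiased for unsampled arms.
        estimated = reward / prob
        self.weights[arm] *= np.exp(self.eta * estimated)
        self.weights /= self.weights.max()  # rescale for numerical stability

# Toy usage: a hypothetical drifting embedding whose norm stands in for the
# variance-related reward, clipped to [0, 1] since EXP3 expects bounded rewards.
rng = np.random.default_rng(0)
sampler = BanditNeighborSampler(num_neighbors=5)
for step in range(200):
    arm, prob = sampler.sample()
    neighbor_embedding = rng.normal(size=16) * (1.0 + 0.5 * arm)
    reward = min(1.0, float(np.linalg.norm(neighbor_embedding)) / 10.0)
    sampler.update(arm, prob, reward)
print(np.round(sampler.probabilities(), 3))
```

Over time the mixture of exponential weights and uniform exploration concentrates probability on the neighbors that consistently yield higher rewards while still occasionally probing the others, which mirrors the exploration/exploitation trade-off described in the abstract.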
Author Information
Ziqi Liu (Ant Group)
Zhengwei Wu (Ant Financial)
Zhiqiang Zhang (Ant Financial Services Group)
Jun Zhou (Ant Financial)
Shuang Yang (Ant Financial)
Le Song (Ant Financial Services Group)
Yuan Qi (Ant Financial Services Group)
More from the Same Authors
- 2023 Poster: FAST: a Fused and Accurate Shrinkage Tree for Heterogeneous Treatment Effects Estimation
  Jia Gu · Caizhi Tang · Han Yan · Qing Cui · Longfei Li · Jun Zhou
- 2023 Poster: Unleashing the Power of Graph Data Augmentation on Covariate Shift
  Yongduo Sui · Qitian Wu · Jiancan Wu · Qing Cui · Longfei Li · Jun Zhou · Xiang Wang · Xiangnan He
- 2023 Poster: Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning
  Xiaoming Shi · Siqiao Xue · Kangrui Wang · Fan Zhou · James Zhang · Jun Zhou · Chenhao Tan · Hongyuan Mei
- 2023 Poster: Prompt-augmented Temporal Point Process for Streaming Event Sequence
  Siqiao Xue · Yan Wang · Zhixuan Chu · Xiaoming Shi · Caigao JIANG · Hongyan Hao · Gangwei Jiang · Xiaoyun Feng · James Zhang · Jun Zhou
- 2022 Spotlight: Debiased Causal Tree: Heterogeneous Treatment Effects Estimation with Unmeasured Confounding
  Caizhi Tang · Huiyuan Wang · Xinyu Li · Qing Cui · Ya-Lin Zhang · Feng Zhu · Longfei Li · Jun Zhou · Linbo Jiang
- 2022 Poster: Debiased Causal Tree: Heterogeneous Treatment Effects Estimation with Unmeasured Confounding
  Caizhi Tang · Huiyuan Wang · Xinyu Li · Qing Cui · Ya-Lin Zhang · Feng Zhu · Longfei Li · Jun Zhou · Linbo Jiang
- 2021 Poster: A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs
  Runzhong Wang · Zhigang Hua · Gan Liu · Jiayi Zhang · Junchi Yan · Feng Qi · Shuang Yang · Jun Zhou · Xiaokang Yang
- 2021 Poster: MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data
  Zhibo Zhu · Ziqi Liu · Ge Jin · Zhiqiang Zhang · Lei Chen · Jun Zhou · Jianyong Zhou
- 2019: Invited Talk by Yuan (Alan) Qi (Ant Financial)
  Yuan Qi
- 2019 Poster: Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning
  Chao Qu · Shie Mannor · Huan Xu · Yuan Qi · Le Song · Junwu Xiong
- 2019 Poster: Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection
  Bingzhe Wu · Shiwan Zhao · Chaochao Chen · Haoyang Xu · Li Wang · Xiaolu Zhang · Guangyu Sun · Jun Zhou