Temporal Graph Networks (TGNs) are powerful at modeling temporal graph data thanks to their high model complexity. This higher complexity, however, carries a greater risk of overfitting, which causes TGNs to capture random noise instead of essential semantic information. To address this issue, our idea is to transform the temporal graphs using data augmentation (DA) with adaptive magnitudes, so as to effectively augment the input features while preserving the essential semantic information. Based on this idea, we present the MeTA (Memory Tower Augmentation) module: a multi-level module that processes augmented graphs of different magnitudes on separate levels and performs message passing across levels to provide adaptively augmented inputs for every prediction. MeTA can be flexibly applied to the training of popular TGNs to improve their effectiveness without increasing their time complexity. To complement MeTA, we propose three DA strategies that realistically model noise by modifying both the temporal and topological features. Empirical results on standard datasets show that MeTA yields significant gains for popular TGN models on edge prediction and node classification in an efficient manner.
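To make the multi-level idea concrete, the following is a minimal, hypothetical sketch of building several augmented views of a temporal edge stream at different magnitudes, one view per tower level. The specific perturbations (timestamp jitter and random edge dropping), the function names, and the magnitude schedule are illustrative assumptions, not the paper's actual DA strategies or implementation.

```python
import random

random.seed(0)  # for reproducibility of this toy example

def augment_edges(edges, magnitude, max_dt=5.0):
    """Toy DA on a temporal edge list [(u, v, t), ...].

    `magnitude` in [0, 1] scales the perturbation strength:
    larger magnitudes drop more edges and jitter timestamps more.
    (Hypothetical strategy, not the paper's exact augmentations.)
    """
    out = []
    for (u, v, t) in edges:
        if random.random() < magnitude * 0.2:  # topological noise: drop edge
            continue
        jitter = random.uniform(-1.0, 1.0) * magnitude * max_dt  # temporal noise
        out.append((u, v, t + jitter))
    return out

def meta_tower_views(edges, magnitudes=(0.1, 0.3, 0.5)):
    """One augmented view per level, mimicking (at the data level only)
    a tower whose levels see augmentations of different magnitudes."""
    return [augment_edges(edges, m) for m in magnitudes]

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 3.0)]
levels = meta_tower_views(edges)
print(len(levels))  # one augmented view per magnitude, i.e. 3
```

In the actual module, these per-level views would feed separate memory levels with message passing between them; this sketch only illustrates how a magnitude parameter can control both temporal and topological perturbations.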
Author Information
Yiwei Wang (National University of Singapore)
Yujun Cai (Nanyang Technological University)
Yuxuan Liang (National University of Singapore)
Henghui Ding (Swiss Federal Institute of Technology)
Changhu Wang (ByteDance Inc.)
Siddharth Bhatia (National University of Singapore)
Bryan Hooi (National University of Singapore)
More from the Same Authors
- 2021 Poster: Directed Graph Contrastive Learning »
  Zekun Tong · Yuxuan Liang · Henghui Ding · Yongxing Dai · Xinke Li · Changhu Wang
- 2021 Poster: Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning »
  Yifan Zhang · Bryan Hooi · Dapeng Hu · Jian Liang · Jiashi Feng
- 2021 Poster: SSMF: Shifting Seasonal Matrix Factorization »
  Koki Kawabata · Siddharth Bhatia · Rui Liu · Mohit Wadhwa · Bryan Hooi
- 2021 Poster: Direct Multi-view Multi-person 3D Pose Estimation »
  Tao Wang · Jianfeng Zhang · Yujun Cai · Shuicheng Yan · Jiashi Feng
- 2021 Poster: EIGNN: Efficient Infinite-Depth Graph Neural Networks »
  Juncheng Liu · Kenji Kawaguchi · Bryan Hooi · Yiwei Wang · Xiaokui Xiao
- 2020 Poster: Is normalization indispensable for training deep neural network? »
  Jie Shao · Kai Hu · Changhu Wang · Xiangyang Xue · Bhiksha Raj
- 2020 Poster: Digraph Inception Convolutional Networks »
  Zekun Tong · Yuxuan Liang · Changsheng Sun · Xinke Li · David Rosenblum · Andrew Lim
- 2020 Oral: Is normalization indispensable for training deep neural network? »
  Jie Shao · Kai Hu · Changhu Wang · Xiangyang Xue · Bhiksha Raj