Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications. However, with their inherently finite aggregation layers, existing GNN models may not be able to effectively capture long-range dependencies in the underlying graphs. Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN), to efficiently capture very long-range dependencies. We theoretically derive a closed-form solution of EIGNN which makes training an infinite-depth GNN model tractable. We then further show that we can achieve more efficient computation for training EIGNN by using eigendecomposition. The empirical results of comprehensive experiments on synthetic and real-world datasets show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance. Furthermore, we show that our model is also more robust against both noise and adversarial perturbations on node features.
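The abstract's two key ingredients, a closed-form fixed point for the infinite-depth layer and an eigendecomposition-based speedup, can be made concrete with a small sketch. The sketch below is illustrative only: it assumes an implicit layer of the form Z = γ · g(F) Z S + X, with S a symmetrically normalized adjacency matrix, F a learnable weight matrix, and g(F) = FFᵀ/(‖FFᵀ‖_F + ε) a contractive map. These symbols, and the direct linear solve standing in for the paper's eigendecomposition trick, are assumptions for illustration rather than the authors' code.

```python
import numpy as np

def g(F, eps=1e-6):
    # Normalize F F^T so its spectral norm is < 1, making the update contractive
    # (the Frobenius norm upper-bounds the spectral norm).
    FFt = F @ F.T
    return FFt / (np.linalg.norm(FFt) + eps)

def infinite_depth_iterative(F, S, X, gamma=0.8, iters=300):
    # Approximate the infinite-depth fixed point by repeated aggregation:
    # Z <- gamma * g(F) Z S + X, which converges since gamma * ||g(F)|| * ||S|| < 1.
    Z = np.zeros_like(X)
    G = g(F)
    for _ in range(iters):
        Z = gamma * G @ Z @ S + X
    return Z

def infinite_depth_closed_form(F, S, X, gamma=0.8):
    # Solve the fixed point directly: vec(Z) = (I - gamma * S^T kron g(F))^{-1} vec(X),
    # via the column-major identity vec(G Z S) = (S^T kron G) vec(Z).
    G = g(F)
    n, m = S.shape[0], G.shape[0]
    A = np.eye(n * m) - gamma * np.kron(S.T, G)
    z = np.linalg.solve(A, X.reshape(-1, order="F"))
    return z.reshape(X.shape, order="F")

# Tiny check on a random 5-node graph (X holds one feature column per node).
rng = np.random.default_rng(0)
n, m = 5, 4
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                  # undirected adjacency
d = np.maximum(A.sum(1), 1.0)
S = A / np.sqrt(np.outer(d, d))         # D^{-1/2} A D^{-1/2}
X = rng.standard_normal((m, n))
F = rng.standard_normal((m, m))
assert np.allclose(infinite_depth_iterative(F, S, X),
                   infinite_depth_closed_form(F, S, X), atol=1e-6)
```

The Kronecker-product solve costs O((mn)³) and is shown here only to verify the fixed point; in practice one would factor S and g(F) once (the eigendecomposition route the abstract alludes to) so that applying the inverse at each training step becomes cheap.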
Author Information
Juncheng Liu (National University of Singapore)
Kenji Kawaguchi (MIT)
Bryan Hooi (National University of Singapore)
Yiwei Wang (National University of Singapore)
Xiaokui Xiao (National University of Singapore)
More from the Same Authors
- 2021: Catastrophic Failures of Neural Active Learning on Heteroskedastic Distributions
  Savya Khosla · Alex Lamb · Jordan Ash · Cyril Zhang · Kenji Kawaguchi
- 2021: Noether Networks: Meta-Learning Useful Conserved Quantities
  Ferran Alet · Dylan Doblar · Allan Zhou · Josh Tenenbaum · Kenji Kawaguchi · Chelsea Finn
- 2022 Keynote 2: Temporal Graph Learning: Some Challenges and Recent Directions
  Bryan Hooi
- 2022 Poster: Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
  Tianyuan Jin · Pan Xu · Xiaokui Xiao · Anima Anandkumar
- 2022 Poster: MGNNI: Multiscale Graph Neural Networks with Implicit Layers
  Juncheng Liu · Bryan Hooi · Kenji Kawaguchi · Xiaokui Xiao
- 2022 Poster: Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
  Yifan Zhang · Bryan Hooi · Lanqing Hong · Jiashi Feng
- 2021 Poster: Adversarial Training Helps Transfer Learning via Better Representations
  Zhun Deng · Linjun Zhang · Kailas Vodrahalli · Kenji Kawaguchi · James Zou
- 2021 Poster: Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization
  Clement Gehring · Kenji Kawaguchi · Jiaoyang Huang · Leslie Kaelbling
- 2021 Poster: Adaptive Data Augmentation on Temporal Graphs
  Yiwei Wang · Yujun Cai · Yuxuan Liang · Henghui Ding · Changhu Wang · Siddharth Bhatia · Bryan Hooi
- 2021 Poster: Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
  Yifan Zhang · Bryan Hooi · Dapeng Hu · Jian Liang · Jiashi Feng
- 2021 Poster: SSMF: Shifting Seasonal Matrix Factorization
  Koki Kawabata · Siddharth Bhatia · Rui Liu · Mohit Wadhwa · Bryan Hooi
- 2021 Poster: Noether Networks: meta-learning useful conserved quantities
  Ferran Alet · Dylan Doblar · Allan Zhou · Josh Tenenbaum · Kenji Kawaguchi · Chelsea Finn
- 2021 Poster: Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
  Ferran Alet · Maria Bauza · Kenji Kawaguchi · Nurullah Giray Kuru · Tomás Lozano-Pérez · Leslie Kaelbling
- 2021 Poster: Discrete-Valued Neural Communication
  Dianbo Liu · Alex Lamb · Kenji Kawaguchi · Anirudh Goyal · Chen Sun · Michael Mozer · Yoshua Bengio
- 2019 Poster: Efficient Pure Exploration in Adaptive Round Model
  Tianyuan Jin · Jieming Shi · Xiaokui Xiao · Enhong Chen
- 2016 Poster: Deep Learning without Poor Local Minima
  Kenji Kawaguchi
- 2016 Oral: Deep Learning without Poor Local Minima
  Kenji Kawaguchi
- 2015 Poster: Bayesian Optimization with Exponential Convergence
  Kenji Kawaguchi · Leslie Kaelbling · Tomás Lozano-Pérez