Graph Neural Networks (GNNs), which aggregate features from neighbors, are widely used for processing graph-structured data due to their powerful representation learning capabilities. It is generally believed that GNNs can implicitly remove feature noise and thus still yield generalizable models; this view motivates several existing works to derive GNN models from the graph signal denoising (GSD) problem. However, few works have rigorously analyzed the implicit denoising effect in graph neural networks. In this work, we conduct a comprehensive theoretical study of when and why implicit denoising happens in GNNs. Our analysis suggests that implicit denoising depends largely on the connectivity and size of the graph, as well as on the GNN architecture. Extensive empirical evaluations verify our theoretical analysis and show that GNNs are more effective than Multi-Layer Perceptrons (MLPs) at eliminating noise in the feature matrix. Finally, motivated by adversarial machine learning for improving the robustness of neural networks, and by the correlation between node features and graph structure in GSD, we propose the adversarial graph signal denoising (AGSD) problem. Solving this problem yields a robust graph convolution that enhances both the smoothness of node representations and the implicit denoising effect.
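For context, a standard graph signal denoising objective from the literature the abstract refers to (a minimal sketch; the exact objective and notation used in this paper may differ) is

\[
\min_{F}\ \|F - X\|_F^2 + \lambda\,\mathrm{tr}\!\left(F^{\top} L F\right),
\]

where \(X\) is the noisy feature matrix, \(L\) is a (normalized) graph Laplacian, and \(\lambda > 0\) trades off fidelity to the observed features against smoothness over the graph. One gradient-descent step from \(F = X\) with step size \(1/2\) gives

\[
F \leftarrow X - \lambda L X = (1-\lambda)\,X + \lambda\,\tilde{A} X \qquad \text{for } L = I - \tilde{A},
\]

which is a GCN-style neighbor aggregation; this is the sense in which GNN propagation can be read as implicit feature denoising.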
Author Information
Songtao Liu (The Pennsylvania State University)
Rex Ying (Yale University)
Hanze Dong (The Hong Kong University of Science and Technology)
Lu Lin (The Pennsylvania State University)
Jinghui Chen (The Pennsylvania State University)
Dinghao Wu (The Pennsylvania State University)
More from the Same Authors
- 2022 : GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
  Kenza Amara · Rex Ying · Ce Zhang
- 2022 : Learning Efficient Hybrid Particle-continuum Representations of Non-equilibrium N-body Systems
  Tailin Wu · Michael Sun · Hsuan-Gu Chou · Pranay Reddy Samala · Sithipont Cholsaipant · Sophia Kivelson · Jacqueline Yau · Rex Ying · E. Paulo Alves · Jure Leskovec · Frederico Fiuza
- 2022 : On the Vulnerability of Backdoor Defenses for Federated Learning
  Pei Fang · Jinghui Chen
- 2022 : Accelerating Adaptive Federated Optimization with Local Gossip Communications
  Yujia Wang · Pei Fang · Jinghui Chen
- 2022 : Particle-based Variational Inference with Preconditioned Functional Gradient Flow
  Hanze Dong · Xi Wang · Yong Lin · Tong Zhang
- 2022 : Spectrum Guided Topology Augmentation for Graph Contrastive Learning
  Lu Lin · Jinghui Chen · Hongning Wang
- 2022 : Efficient Automatic Machine Learning via Design Graphs
  Shirley Wu · Jiaxuan You · Jure Leskovec · Rex Ying
- 2022 : GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
  Kenza Amara · Rex Ying · Zitao Zhang · Zhihao Han · Yinan Shan · Ulrik Brandes · Sebastian Schemm
- 2022 Workshop: New Frontiers in Graph Learning
  Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
- 2022 Poster: One-shot Neural Backdoor Erasing via Adversarial Weight Masking
  Shuwen Chai · Jinghui Chen
- 2022 : Invited Talk
  Rex Ying