
Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective
Rongzhe Wei · Haoteng YIN · Junteng Jia · Austin Benson · Pan Li

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #1038

Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs due to their impressive capability of capturing nonlinear relations in graph-structured data. However, for node classification tasks, GNNs in practice often yield only marginal improvement over their linear counterparts, and previous works offer little understanding of this phenomenon. In this work, we resort to Bayesian learning to give an in-depth investigation of the role of non-linearity in GNNs for node classification tasks. Given a graph generated from the statistical model CSBM, we observe that the maximum-a-posteriori estimation of a node label given its own and its neighbors' attributes consists of two types of non-linearity: a transformation of node attributes and a ReLU-activated feature aggregation from neighbors. The latter surprisingly matches the type of non-linearity used in many GNN models. By further imposing a Gaussian assumption on node attributes, we prove that the superiority of these ReLU activations is significant only when the node attributes are far more informative than the graph structure, which nicely explains previous empirical observations. A similar argument holds when there is a distribution shift of node attributes between the training and testing datasets. Finally, we verify our theory on both synthetic and real-world networks. Our code is available at https://github.com/Graph-COM/Bayesian_inference_based_GNN.git.
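To make the setting concrete, the following is a minimal NumPy sketch of a CSBM-style graph (two communities, Gaussian node attributes) together with a linear versus ReLU-activated neighbor aggregation, the two variants the abstract contrasts. All parameter values and function names here are illustrative assumptions for exposition, not the paper's actual experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CSBM parameters (illustrative only, not the paper's settings)
n, d = 200, 5              # number of nodes, attribute dimension
p_in, p_out = 0.10, 0.02   # intra-/inter-community edge probabilities
mu = 1.0                   # class-mean separation of Gaussian attributes

# Labels: two balanced communities, encoded as +1 / -1
y = np.where(np.arange(n) < n // 2, 1, -1)

# Gaussian node attributes whose first coordinate carries the class signal
X = rng.normal(size=(n, d))
X[:, 0] += mu * y

# Symmetric adjacency matrix sampled from the CSBM edge probabilities
probs = np.where(np.equal.outer(y, y), p_in, p_out)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(float)

def aggregate(X, A, nonlinear=False):
    """One aggregation step: sum neighbor features, optionally ReLU-activated."""
    H = np.maximum(X, 0.0) if nonlinear else X
    return A @ H

H_lin = aggregate(X, A, nonlinear=False)   # linear counterpart
H_relu = aggregate(X, A, nonlinear=True)   # ReLU-activated aggregation
print(H_lin.shape, H_relu.shape)
```

Per the paper's result, the gap between `H_relu` and `H_lin` as classification features should matter mainly in the regime where `mu` (attribute informativeness) dominates the structural signal `p_in - p_out`.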

Author Information

Rongzhe Wei (Purdue University)
Haoteng YIN (Purdue University)
Junteng Jia (Meta AI)
Austin Benson (D. E. Shaw Group)
Pan Li (Georgia Institute of Technology)
