Poster
Provably Powerful Graph Networks
Haggai Maron · Heli Ben-Hamu · Hadar Serviansky · Yaron Lipman

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #71
Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressive power of graph neural networks (GNNs). It was shown that the popular message passing GNN cannot distinguish between graphs that are indistinguishable by the $1$-WL test \citep{morris2019,xu2019}. Unfortunately, many simple instances of graphs are indistinguishable by the $1$-WL test. In search of more expressive graph learning models, we build upon the recent $k$-order invariant and equivariant graph neural networks \citep{maron2019} and present two results: First, we show that such $k$-order networks can distinguish between non-isomorphic graphs as well as the $k$-WL tests, which are provably stronger than the $1$-WL test for $k>2$. This makes these models strictly stronger than message passing models. Unfortunately, the higher expressiveness of these models comes with a computational cost of processing high-order tensors. Second, setting our goal at building a provably stronger, \emph{simple}, and \emph{scalable} model, we show that a reduced $2$-order network containing just the scaled identity operator, augmented with a single quadratic operation (matrix multiplication), has provable $3$-WL expressive power. Put differently, we suggest a simple model that interleaves applications of standard Multilayer Perceptrons (MLPs) applied to the feature dimension with matrix multiplication. We validate this model by presenting state-of-the-art results on popular graph classification and regression tasks. To the best of our knowledge, this is the first practical invariant/equivariant model with guaranteed $3$-WL expressiveness, strictly stronger than message passing models.
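
The following is a minimal sketch of one block of the kind of model the abstract describes: MLPs applied entry-wise along the feature dimension of an $n \times n$ tensor, interleaved with a single matrix multiplication. It assumes a PyTorch-style implementation; the class and parameter names (PPGNBlock, d_in, d_out) are illustrative and not the authors' reference code.

```python
import torch
import torch.nn as nn

class PPGNBlock(nn.Module):
    """One block: two feature-wise MLPs on an (batch, d, n, n) tensor,
    followed by matrix multiplication of the results as n x n matrices."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        # 1x1 convolutions act as MLPs applied independently to each (i, j)
        # entry, i.e. along the feature dimension only.
        self.mlp1 = nn.Sequential(nn.Conv2d(d_in, d_out, 1), nn.ReLU(),
                                  nn.Conv2d(d_out, d_out, 1))
        self.mlp2 = nn.Sequential(nn.Conv2d(d_in, d_out, 1), nn.ReLU(),
                                  nn.Conv2d(d_out, d_out, 1))
        # MLP applied to the concatenation of the input and the matrix product.
        self.mlp3 = nn.Sequential(nn.Conv2d(d_in + d_out, d_out, 1), nn.ReLU(),
                                  nn.Conv2d(d_out, d_out, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in, n, n), e.g. adjacency plus node features on the diagonal
        m1 = self.mlp1(x)             # (batch, d_out, n, n)
        m2 = self.mlp2(x)             # (batch, d_out, n, n)
        mult = torch.matmul(m1, m2)   # the single quadratic operation
        return self.mlp3(torch.cat([x, mult], dim=1))

# Usage: stack several blocks, then pool invariantly (e.g. sum over the n x n
# dimensions) to obtain a graph-level prediction.
block = PPGNBlock(d_in=2, d_out=16)
x = torch.randn(1, 2, 10, 10)         # a toy 10-node graph
print(block(x).shape)                 # torch.Size([1, 16, 10, 10])
```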

Author Information

Haggai Maron (NVIDIA Research)

I am a PhD student at the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science under the supervision of Prof. Yaron Lipman. My main fields of interest are machine learning, optimization, and shape analysis. More specifically, I am working on applying deep learning to irregular domains (e.g., graphs, point clouds, and surfaces) and on graph/shape matching problems. I serve as a reviewer for NeurIPS, ICCV, SIGGRAPH, SIGGRAPH Asia, ACM TOG, JAIR, TVCG, and SGP.

Heli Ben-Hamu (Weizmann Institute of Science)
Hadar Serviansky (Weizmann Institute of Science)
Yaron Lipman (Weizmann Institute of Science)