Graph Neural Networks (GNNs) are limited in their expressive power, struggle with long-range interactions, and lack a principled way to model higher-order structures. These problems can be attributed to the strong coupling between the computational graph and the input graph structure. The recently proposed Message Passing Simplicial Networks naturally decouple these elements by performing message passing on the clique complex of the graph. Nevertheless, these models can be severely constrained by the rigid combinatorial structure of Simplicial Complexes (SCs). In this work, we extend recent theoretical results on SCs to regular Cell Complexes, topological objects that flexibly subsume SCs and graphs. We show that this generalisation provides a powerful set of graph "lifting" transformations, each leading to a unique hierarchical message passing procedure. The resulting methods, which we collectively call CW Networks (CWNs), are strictly more powerful than the WL test and not less powerful than the 3-WL test. In particular, we demonstrate the effectiveness of one such scheme, based on rings, when applied to molecular graph problems. The proposed architecture benefits from provably larger expressivity than commonly used GNNs, from principled modelling of higher-order signals, and from compressed distances between nodes. We demonstrate that our model achieves state-of-the-art results on a variety of molecular datasets.
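To make the ring-based lifting concrete, the sketch below builds a 2-dimensional cell complex from a graph and runs one hierarchical message-passing step over it. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`lift_to_cell_complex`, `cwn_layer`) are invented, networkx's cycle basis stands in for the paper's induced-ring lift, and an unweighted sum replaces the learnable update functions of a real CWN layer.

```python
import networkx as nx

def lift_to_cell_complex(G, max_ring=6):
    """Lift a graph into a 2-dimensional regular cell complex:
    0-cells are nodes, 1-cells are edges, and 2-cells are rings.
    The paper attaches 2-cells to induced cycles up to a size bound;
    this sketch uses a cycle basis as a cheap stand-in."""
    edges = [tuple(sorted(e)) for e in G.edges()]
    rings = [tuple(c) for c in nx.cycle_basis(G) if len(c) <= max_ring]
    # Boundary relation: a ring is bounded by the edges between
    # consecutive nodes along the cycle.
    ring_boundary = {
        r: [tuple(sorted((r[i], r[(i + 1) % len(r)]))) for i in range(len(r))]
        for r in rings
    }
    return list(G.nodes()), edges, rings, ring_boundary

def cwn_layer(h_node, h_edge, h_ring, ring_boundary):
    """One hierarchical message-passing step with plain sum aggregation.
    Nodes receive from incident edges; edges from their boundary nodes
    and co-boundary rings; rings from their boundary edges."""
    new_node = {
        v: h_node[v] + sum(h_edge[e] for e in h_edge if v in e)
        for v in h_node
    }
    new_edge = {
        e: h_edge[e] + h_node[e[0]] + h_node[e[1]]
           + sum(h_ring[r] for r, b in ring_boundary.items() if e in b)
        for e in h_edge
    }
    new_ring = {
        r: h_ring[r] + sum(h_edge[e] for e in b)
        for r, b in ring_boundary.items()
    }
    return new_node, new_edge, new_ring

# Toy usage on a benzene-like 6-ring:
G = nx.cycle_graph(6)
nodes, edges, rings, rb = lift_to_cell_complex(G)
h_n = {v: 1.0 for v in nodes}
h_e = {e: 1.0 for e in edges}
h_r = {r: 1.0 for r in rings}
h_n, h_e, h_r = cwn_layer(h_n, h_e, h_r, rb)
```

In a deeper stack of such layers, the 2-cell acts as a shortcut between all atoms on the ring, which is the distance-compression effect the abstract refers to.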
Author Information
Cristian Bodnar (University of Cambridge)
Fabrizio Frasca (Twitter)
Nina Otter (UCLA)
Yuguang Wang (Shanghai Jiao Tong University; University of New South Wales)
Pietro Liò (University of Cambridge)
Guido Montufar (UCLA / MPI MIS)
Guido is an Assistant Professor in the Departments of Mathematics and Statistics at the University of California, Los Angeles (UCLA), and the principal investigator of the ERC project "Deep Learning Theory: Geometric Analysis of Capacity, Optimization, and Generalization for Improving Learning in Deep Neural Networks" at the Max Planck Institute for Mathematics in the Sciences. Guido Montúfar is interested in mathematical machine learning, especially the interplay of model capacity, optimization, and generalization in deep learning. In current projects, he investigates optimization landscapes and regularization strategies for neural networks, intrinsic motivation in reinforcement learning, information-theoretic approaches to learning data representations, information-geometric and optimal-transport approaches to generative modelling, and algebraic-geometric approaches to graphical models with hidden variables.
Michael Bronstein (Imperial College London / Twitter)
More from the Same Authors
- 2021 : Interaction data are identifiable even across long periods of time »
  Ana-Maria Cretu · Federico Monti · Stefano Marrone · Xiaowen Dong · Michael Bronstein · Yves-Alexandre de Montjoye
- 2021 : Power-law asymptotics of the generalization error for GP regression under power-law priors and targets »
  Hui Jin · Pradeep Kr. Banerjee · Guido Montufar
- 2021 : Learning Graph Search Heuristics »
  Michal Pándy · Rex Ying · Gabriele Corso · Petar Veličković · Jure Leskovec · Pietro Liò
- 2021 : GRAND: Graph Neural Diffusion »
  Benjamin Chamberlain · James Rowbottom · Maria Gorinova · Stefan Webb · Emanuele Rossi · Michael Bronstein
- 2021 : Neural ODE Processes: A Short Summary »
  Alexander Norcliffe · Cristian Bodnar · Ben Day · Jacob Moss · Pietro Liò
- 2021 : On Second Order Behaviour in Augmented Neural ODEs: A Short Summary »
  Alexander Norcliffe · Cristian Bodnar · Ben Day · Nikola Simidjievski · Pietro Liò
- 2021 : Invited Talk 1: Michael Bronstein: Geometric deep learning for functional protein design »
  Michael Bronstein
- 2021 Poster: Beltrami Flow and Neural Diffusion on Graphs »
  Benjamin Chamberlain · James Rowbottom · Davide Eynard · Francesco Di Giovanni · Xiaowen Dong · Michael Bronstein
- 2021 Poster: Neural Distance Embeddings for Biological Sequences »
  Gabriele Corso · Zhitao Ying · Michal Pándy · Petar Veličković · Jure Leskovec · Pietro Liò
- 2021 Poster: On the Expected Complexity of Maxout Networks »
  Hanna Tseran · Guido Montufar
- 2021 Poster: Partition and Code: learning how to compress graphs »
  Giorgos Bouritsas · Andreas Loukas · Nikolaos Karalias · Michael Bronstein
- 2020 : Keynote 6: Guido Montufar »
  Guido Montufar
- 2020 : Session 1 | Invited talk: Michael Bronstein, "Geometric Deep Learning for Functional Protein Design" »
  Michael Bronstein · Atilim Gunes Baydin
- 2020 : Contributed Talk 4: Directional Graph Networks »
  Dominique Beaini · Saro Passaro · Vincent Létourneau · Will Hamilton · Gabriele Corso · Pietro Liò
- 2020 Poster: Path Integral Based Convolution and Pooling for Graph Neural Networks »
  Zheng Ma · Junyu Xuan · Yuguang Wang · Ming Li · Pietro Liò
- 2020 Poster: On Second Order Behaviour in Augmented Neural ODEs »
  Alexander Norcliffe · Cristian Bodnar · Ben Day · Nikola Simidjievski · Pietro Liò
- 2020 Poster: Principal Neighbourhood Aggregation for Graph Nets »
  Gabriele Corso · Luca Cavalleri · Dominique Beaini · Pietro Liò · Petar Veličković
- 2020 Poster: Fast geometric learning with symbolic matrices »
  Jean Feydy · Alexis Glaunès · Benjamin Charlier · Michael Bronstein
- 2020 Spotlight: Fast geometric learning with symbolic matrices »
  Jean Feydy · Alexis Glaunès · Benjamin Charlier · Michael Bronstein
- 2019 : Poster Session #2 »
  Yunzhu Li · Peter Meltzer · Jianing Sun · Guillaume SALHA · Marin Vlastelica Pogančić · Chia-Cheng Liu · Fabrizio Frasca · Marc-Alexandre Côté · Vikas Verma · Abdulkadir CELIKKANAT · Pierluca D'Oro · Priyesh Vijayan · Maria Schuld · Petar Veličković · Kshitij Tayal · Yulong Pei · Hao Xu · Lei Chen · Pengyu Cheng · Ines Chami · Dongkwan Kim · Guilherme Gomes · Lukasz Maziarka · Jessica Hoffmann · Ron Levie · Antonia Gogoglou · Shunwang Gong · Federico Monti · Wenlin Wang · Yan Leng · Salvatore Vivona · Daniel Flam-Shepherd · Chester Holtz · Li Zhang · MAHMOUD KHADEMI · I-Chung Hsieh · Aleksandar Stanić · Ziqiao Meng · Yuhang Jiao