Can we use machine learning to compress graph data? The absence of ordering in graphs poses a significant challenge to conventional compression algorithms, limiting their attainable gains as well as their ability to discover relevant patterns. On the other hand, most graph compression approaches rely on domain-dependent handcrafted representations and cannot adapt to different underlying graph distributions. This work aims to establish the necessary principles a lossless graph compression method should follow to approach the entropy storage lower bound. Instead of making rigid assumptions about the graph distribution, we formulate the compressor as a probabilistic model that can be learned from data and generalise to unseen instances. Our “Partition and Code” (PnC) framework entails three steps: first, a partitioning algorithm decomposes the graph into subgraphs; then, these are mapped to the elements of a small dictionary on which we learn a probability distribution; and finally, an entropy encoder translates the representation into bits. All the components (partitioning, dictionary and distribution) are parametric and can be trained with gradient descent. We theoretically compare the compression quality of several graph encodings and prove, under mild conditions, that PnC achieves compression gains that grow either linearly or quadratically with the number of vertices. Empirically, PnC yields significant compression improvements on diverse real-world networks.
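For intuition, here is a minimal sketch of the three-step pipeline on toy graphs. It is not the paper's method: the paper learns the partitioning, dictionary and distribution with gradient descent, whereas this sketch assumes a fixed greedy partitioner, a degree-sequence hash as a crude stand-in for canonical subgraph atoms, and an empirical count distribution whose negative log-likelihood gives the entropy-coding length in bits. All helper names (`partition`, `atom_key`, `code_length_bits`) are hypothetical.

```python
# Toy sketch of a partition -> dictionary -> entropy-code pipeline.
# Hypothetical, simplified stand-ins for PnC's learned components.
import math
from collections import Counter

import networkx as nx


def partition(G, k=4):
    """Greedy stand-in partitioner: cut G into connected blocks of <= k vertices.
    (Edges between blocks are ignored here, so the bound below is optimistic;
    the actual method encodes cut edges as well.)"""
    remaining = set(G.nodes)
    blocks = []
    while remaining:
        seed = next(iter(remaining))
        block, frontier = {seed}, [seed]
        while frontier and len(block) < k:
            u = frontier.pop()
            for v in G.neighbors(u):
                if v in remaining and v not in block and len(block) < k:
                    block.add(v)
                    frontier.append(v)
        remaining -= block
        blocks.append(G.subgraph(block).copy())
    return blocks


def atom_key(H):
    """Map a subgraph to a dictionary slot. A degree-sequence hash is a crude
    stand-in for proper canonical forms of subgraph atoms."""
    return (H.number_of_nodes(), tuple(sorted(d for _, d in H.degree())))


def code_length_bits(graphs, k=4):
    """Entropy-coding bound: sum of -log2 p(atom) under the empirical
    distribution over dictionary atoms."""
    atoms = [atom_key(H) for G in graphs for H in partition(G, k)]
    counts = Counter(atoms)
    total = len(atoms)
    return sum(-math.log2(counts[a] / total) for a in atoms)


graphs = [nx.erdos_renyi_graph(30, 0.1, seed=s) for s in range(10)]
print(f"approximate code length: {code_length_bits(graphs):.1f} bits")
```

A learned version would replace the greedy partitioner with a trainable one, the hash with a learned dictionary of subgraph atoms, and the empirical counts with a parametric distribution fitted by gradient descent, as the abstract describes.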
Author Information
Giorgos Bouritsas (Imperial College London)
Andreas Loukas (EPFL, MIT)
Nikolaos Karalias (EPFL)
Michael Bronstein (Imperial College London / Twitter)
More from the Same Authors
- 2021: Interaction data are identifiable even across long periods of time »
  Ana-Maria Cretu · Federico Monti · Stefano Marrone · Xiaowen Dong · Michael Bronstein · Yves-Alexandre de Montjoye
- 2022 Poster: Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions »
  Nikolaos Karalias · Joshua Robinson · Andreas Loukas · Stefanie Jegelka
- 2021: GRAND: Graph Neural Diffusion »
  Benjamin Chamberlain · James Rowbottom · Maria Gorinova · Stefan Webb · Emanuele Rossi · Michael Bronstein
- 2021: Invited Talk 1: Michael Bronstein: Geometric deep learning for functional protein design »
  Michael Bronstein
- 2021 Poster: What training reveals about neural network complexity »
  Andreas Loukas · Marinos Poiitis · Stefanie Jegelka
- 2021 Poster: Beltrami Flow and Neural Diffusion on Graphs »
  Benjamin Chamberlain · James Rowbottom · Davide Eynard · Francesco Di Giovanni · Xiaowen Dong · Michael Bronstein
- 2021 Poster: SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning »
  Mattia Atzeni · Jasmina Bogojeska · Andreas Loukas
- 2021 Poster: Weisfeiler and Lehman Go Cellular: CW Networks »
  Cristian Bodnar · Fabrizio Frasca · Nina Otter · Yuguang Wang · Pietro Liò · Guido Montufar · Michael Bronstein
- 2020: Session 1 | Invited talk: Michael Bronstein, "Geometric Deep Learning for Functional Protein Design" »
  Michael Bronstein · Atilim Gunes Baydin
- 2020 Poster: How hard is it to distinguish graphs with graph neural networks? »
  Andreas Loukas
- 2020 Poster: Building powerful and equivariant graph neural networks with structural message-passing »
  Clément Vignac · Andreas Loukas · Pascal Frossard
- 2020 Poster: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs »
  Nikolaos Karalias · Andreas Loukas
- 2020 Oral: Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs »
  Nikolaos Karalias · Andreas Loukas
- 2020 Poster: Fast geometric learning with symbolic matrices »
  Jean Feydy · Alexis Glaunès · Benjamin Charlier · Michael Bronstein
- 2020 Spotlight: Fast geometric learning with symbolic matrices »
  Jean Feydy · Alexis Glaunès · Benjamin Charlier · Michael Bronstein