Convergence and Stability of Graph Convolutional Networks on Large Random Graphs
Nicolas Keriven, Alberto Bietti, Samuel Vaiter
Spotlight presentation: Orals & Spotlights Track 26: Graph/Relational/Theory
on Thu, Dec 10th, 2020 @ 15:50 – 16:00 GMT
Poster Session 6 (more posters)
on Thu, Dec 10th, 2020 @ 17:00 – 19:00 GMT
GatherTown: Graph Neural Network ( Town B1 - Spot B1 )
Abstract: We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel. This allows us to overcome the difficulties of dealing with discrete notions such as isomorphisms on very large graphs, by considering instead more natural geometric aspects. We first study the convergence of GCNs to their continuous counterpart as the number of nodes grows. Our results are fully non-asymptotic and are valid for relatively sparse graphs with an average degree that grows logarithmically with the number of nodes. We then analyze the stability of GCNs to small deformations of the random graph model. In contrast to previous studies of stability in discrete settings, our continuous setup allows us to provide more intuitive deformation-based metrics for understanding stability, which have proven useful for explaining the success of convolutional representations on Euclidean domains.
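The random graph model described in the abstract — nodes carry random latent variables and edges are drawn according to a similarity kernel — can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' code: it samples such a graph with a Gaussian kernel (one possible similarity kernel) and applies a single degree-normalized GCN propagation step.

```python
import numpy as np

def sample_kernel_graph(n, dim=2, bandwidth=0.5, seed=0):
    """Sample a graph from a latent-variable random graph model:
    each node i gets a latent z_i ~ Uniform([0,1]^dim), and edge (i, j)
    appears with probability given by a Gaussian similarity kernel.
    (The Gaussian kernel is an illustrative choice.)"""
    rng = np.random.default_rng(seed)
    z = rng.uniform(size=(n, dim))                       # latent node variables
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    p = np.exp(-d2 / (2 * bandwidth ** 2))               # edge probabilities K(z_i, z_j)
    a = (rng.uniform(size=(n, n)) < p).astype(float)     # Bernoulli edge draws
    a = np.triu(a, 1)                                    # keep upper triangle only
    a = a + a.T                                          # symmetrize, no self-loops
    return z, a

def gcn_layer(a, x, w):
    """One GCN propagation step: ReLU(A_norm X W) with
    random-walk (degree) normalization of the adjacency matrix."""
    deg = a.sum(1)
    deg[deg == 0] = 1.0                                  # guard against isolated nodes
    a_norm = a / deg[:, None]                            # D^{-1} A
    return np.maximum(a_norm @ x @ w, 0.0)

# Example: latent positions as input features, a random weight matrix
z, a = sample_kernel_graph(200)
w = np.random.default_rng(1).normal(size=(2, 8))
h = gcn_layer(a, z, w)
```

In this continuous viewpoint, growing `n` while keeping the kernel fixed is what the paper's convergence analysis formalizes: the discrete propagation step above approaches a continuous operator acting on functions of the latent variables.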