Poster
Understanding the Representation Power of Graph Neural Networks in Learning Graph Topology
Nima Dehmamy · Albert-Laszlo Barabasi · Rose Yu

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #37

To deepen our understanding of graph neural networks, we investigate the representation power of Graph Convolutional Networks (GCN) through the looking glass of graph moments, a key property of graph topology encoding paths of various lengths. We find that GCNs are rather restrictive in learning graph moments. Without careful design, GCNs can fail miserably even with multiple layers and nonlinear activation functions. We analyze theoretically the expressiveness of GCNs, arriving at a modular GCN design using different propagation rules. Our modular design is capable of distinguishing graphs from different graph generation models for surprisingly small graphs, a notoriously difficult problem in network science. Our investigation suggests that depth is much more influential than width, and that deeper GCNs are more capable of learning higher-order graph moments. Additionally, combining GCN modules with different propagation rules is critical to the representation power of GCNs.
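As a rough illustration of the graph moments mentioned above (a sketch not taken from this page; the paper's exact definition may differ), the p-th power of the adjacency matrix counts walks of length p, so per-node moments can be read off its row sums. The function name and the per-node convention here are assumptions for illustration only.

```python
import numpy as np

def graph_moments(A, max_p=3):
    """Per-node graph moments up to order max_p.

    Assumed convention (illustrative, not necessarily the paper's):
    the p-th moment of node i is the number of length-p walks
    starting at i, i.e. the i-th row sum of A^p.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Ap = np.eye(n)        # A^0
    moments = []
    for p in range(1, max_p + 1):
        Ap = Ap @ A       # now A^p
        moments.append(Ap.sum(axis=1))  # walks of length p from each node
    return moments

# Example: the triangle graph. Each node has 2 length-1 walks
# and 4 length-2 walks (e.g. from node 0: 0-1-0, 0-1-2, 0-2-0, 0-2-1).
A_tri = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])
m = graph_moments(A_tri, max_p=2)
```

A GCN layer whose propagation rule is a polynomial in A can, in principle, express such quantities; the abstract's point is that which moments are learnable depends strongly on the chosen propagation rule and on depth.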

Author Information

Nima Dehmamy (Northeastern University)
Albert-Laszlo Barabasi (Northeastern University)
Rose Yu (Northeastern University)