Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity

Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang

Hall J #220

Keywords: [ Scalability ] [ Sublinear Complexity ] [ Tensor Sketch ] [ Graph Neural Networks ]

Abstract
Thu 1 Dec 2 p.m. PST — 4 p.m. PST


Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification. When scaling GNNs to larger graphs, we are forced either to train on the complete graph, keeping the full graph adjacency and node embeddings in memory (which is often infeasible), or to mini-batch sample the graph (which incurs computational complexity that grows exponentially with the number of GNN layers). Various sampling-based and historical-embedding-based methods have been proposed to avoid this exponential growth, but none of them eliminates the linear dependence on graph size. This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size, by training GNNs atop a few compact sketches of the graph adjacency and node embeddings. Based on polynomial tensor-sketch (PTS) theory, our framework provides a novel protocol for sketching non-linear activations and graph convolution matrices in GNNs, in contrast to existing methods that sketch linear weights or gradients in neural networks. In addition, we develop a locality-sensitive hashing (LSH) technique that can be trained to improve the quality of sketches. Experiments on large-graph benchmarks demonstrate the scalability and competitive performance of our Sketch-GNNs versus their full-size GNN counterparts.
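The abstract does not spell out the sketching machinery, but the count sketch underlying polynomial tensor sketches is easy to illustrate. The toy sketch below (not the paper's actual PTS protocol; the sizes `n`, `d`, `c` are arbitrary) compresses an n-row node-embedding matrix into c buckets with random bucket and sign hashes, so downstream computation touches a c×d sketch instead of the full n×d matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 10_000, 64, 256  # nodes, feature dim, sketch size (illustrative values)

X = rng.normal(size=(n, d))           # full node-embedding matrix
h = rng.integers(0, c, size=n)        # bucket hash: node i -> row h[i] of the sketch
s = rng.choice([-1.0, 1.0], size=n)   # sign hash: random +/-1 per node

# Count sketch of X: row r of SX accumulates s[i] * X[i] over all nodes i
# with h[i] == r. This is a linear map X -> SX with SX only c x d in size.
SX = np.zeros((c, d))
np.add.at(SX, h, s[:, None] * X)

# For two vectors sketched with the SAME hashes, E[<S x, S y>] = <x, y>,
# which is why inner-product-based computations survive the compression.
```

Because the sketch is linear in X, sketches of sums equal sums of sketches; the paper's contribution is extending such guarantees through the non-linear activations and graph convolutions of a GNN, which a plain count sketch does not handle.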
