

Oral Poster

Scale Equivariant Graph Meta Networks

Ioannis Kalogeropoulos · Giorgos Bouritsas · Yannis Panagakis

East Exhibit Hall A-C #3010
Fri 13 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 5A: Graph Neural Networks
Fri 13 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

This paper pertains to an emerging machine learning paradigm: learning higher-order functions, i.e. functions whose inputs are functions themselves, particularly when these inputs are Neural Networks (NNs). With the growing interest in architectures that process NNs, a recurring design principle has permeated the field: adhering to the permutation symmetries arising from the connectionist structure of NNs. However, are these the sole symmetries present in NN parameterizations? Zooming into most practical activation functions (e.g. sine, ReLU, tanh) answers this question negatively and gives rise to intriguing new symmetries, which we collectively refer to as scaling symmetries, that is, non-zero scalar multiplications and divisions of weights and biases. In this work, we propose Scale Equivariant Graph MetaNetworks - ScaleGMNs, a framework that adapts the Graph Metanetwork (message-passing) paradigm by incorporating scaling symmetries and thus rendering neuron and edge representations equivariant to valid scalings. We introduce novel building blocks, of independent technical interest, that allow for equivariance or invariance with respect to individual scalar multipliers or their product and use them in all components of ScaleGMN. Furthermore, we prove that, under certain expressivity conditions, ScaleGMN can simulate the forward and backward pass of any input feedforward neural network. Experimental results demonstrate that our method advances the state-of-the-art performance for several datasets and activation functions, highlighting the power of scaling symmetries as an inductive bias for NN processing.
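To make the notion of a scaling symmetry concrete, the sketch below (not taken from the paper; layer sizes and variable names are illustrative) shows the standard ReLU case: because ReLU(c·x) = c·ReLU(x) for any c > 0, multiplying a hidden neuron's incoming weights and bias by a positive scalar and dividing its outgoing weights by the same scalar leaves the network function unchanged. These are the per-neuron scalar multipliers to which ScaleGMN representations are made equivariant.

```python
# Minimal sketch of a ReLU scaling symmetry (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3

# Random two-layer MLP parameters.
W1 = rng.standard_normal((d_hidden, d_in))
b1 = rng.standard_normal(d_hidden)
W2 = rng.standard_normal((d_out, d_hidden))
b2 = rng.standard_normal(d_out)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2

# One positive scalar per hidden neuron.
c = rng.uniform(0.5, 2.0, size=d_hidden)

# Rescale: incoming weights and bias multiplied by c, outgoing weights divided by c.
W1_scaled = c[:, None] * W1
b1_scaled = c * b1
W2_scaled = W2 / c[None, :]

x = rng.standard_normal(d_in)
# The two parameterizations compute the same function.
print(np.allclose(mlp(x, W1, b1, W2, b2),
                  mlp(x, W1_scaled, b1_scaled, W2_scaled, b2)))  # True
```

For other activations the valid scalar group differs (e.g. sign flips for odd activations such as sine or tanh); the paper treats these cases uniformly under the scaling-symmetry framework.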
