Poster
in
Workshop: Symmetry and Geometry in Neural Representations (NeurReps)

Equivariance with Learned Canonical Mappings

Oumar Kaba · Arnab Mondal · Yan Zhang · Yoshua Bengio · Siamak Ravanbakhsh

Keywords: [ Deep Learning ] [ vision ] [ Symmetry ] [ shape recognition ] [ Group Theory ] [ Equivariance ]


Abstract:

Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations. In this paper, we propose an alternative that avoids this architectural constraint by learning to produce canonical representations of the data. These canonical mappings can readily be plugged into non-equivariant backbone architectures, and we offer explicit ways to implement them for many groups of interest. We show that this approach enjoys universality while providing interpretable insights. Our main hypothesis is that a neural network trained to perform the canonicalization will outperform predefined heuristics. Our results show that learned canonical mappings indeed lead to better results and that the approach achieves strong performance in practice.
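The abstract's idea of plugging a canonical mapping into a non-equivariant backbone can be illustrated with a minimal sketch. This is not the paper's implementation: here the "learned" canonicalization network is replaced by a hypothetical stand-in `predict_angle` (the angle of the point cloud's mean, which transforms equivariantly under rotation), the group is planar rotations SO(2), and `backbone` is an arbitrary non-invariant function. The names `canonicalize`, `predict_angle`, `invariant_model`, and `backbone` are illustrative, not from the paper.

```python
import numpy as np

def predict_angle(points):
    # Stand-in for a small canonicalization network: returns the angle of
    # the cloud's mean. Rotating the input by alpha shifts this output by
    # alpha, which is the equivariance property the canonicalizer needs.
    mx, my = points.mean(axis=0)
    return np.arctan2(my, mx)

def canonicalize(points, angle_fn):
    # Map the point cloud to a canonical pose by undoing the predicted
    # rotation: canonicalize(R x) == canonicalize(x) for any rotation R.
    theta = angle_fn(points)
    c, s = np.cos(-theta), np.sin(-theta)
    rot_inv = np.array([[c, -s], [s, c]])
    return points @ rot_inv.T

def backbone(points):
    # Any non-equivariant architecture works here; this arbitrary scalar
    # function is NOT rotation-invariant on its own.
    return float(np.sum(points[:, 0] ** 2 - points[:, 1]))

def invariant_model(points):
    # Canonicalize first, then apply the unconstrained backbone: the
    # composition is invariant even though the backbone is not.
    return backbone(canonicalize(points, predict_angle))

# Rotating the input leaves the model output unchanged.
rng = np.random.default_rng(0)
pts = rng.normal(size=(16, 2)) + np.array([2.0, 1.0])  # nonzero mean
alpha = 0.7
rot = np.array([[np.cos(alpha), -np.sin(alpha)],
                [np.sin(alpha), np.cos(alpha)]])
print(abs(invariant_model(pts) - invariant_model(pts @ rot.T)) < 1e-8)
```

For full equivariance (rather than invariance) of vector-valued outputs, one would additionally apply the predicted rotation back to the backbone's output; the invariant case above is the simpler instance of the same recipe.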
