Poster
Learning Partial Equivariances From Data
David W. Romero · Suhas Lohit
Group Convolutional Neural Networks (G-CNNs) constrain learned features to respect the symmetries in the selected group, and lead to better generalization when these symmetries appear in the data. If this is not the case, however, equivariance leads to overly constrained models and worse performance. Frequently, transformations occurring in data can be better represented by a subset of a group than by the group as a whole, e.g., rotations in $[-90^{\circ}, 90^{\circ}]$. In such cases, a model that respects equivariance partially is better suited to represent the data. In addition, relevant transformations may differ for low- and high-level features. For instance, full rotation equivariance is useful to describe edge orientations in a face, but partial rotation equivariance is better suited to describe face poses relative to the camera. In other words, the optimal level of equivariance may differ per layer. In this work, we introduce Partial G-CNNs: G-CNNs able to learn layer-wise levels of partial and full equivariance to discrete groups, continuous groups, and combinations thereof as part of training. Partial G-CNNs retain full equivariance when beneficial, e.g., for rotated MNIST, but adjust it whenever it becomes harmful, e.g., for classification of 6/9 digits or natural images. We empirically show that Partial G-CNNs match G-CNNs when full equivariance is advantageous, and outperform them otherwise. Our code is publicly available at www.github.com/merlresearch/partial_gcnn.
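To make the layer-wise partial-equivariance idea concrete, below is a minimal PyTorch sketch of a lifting convolution that is equivariant only to rotations drawn from a learnable range $[-\theta, \theta]$. The class name `PartialRotationConv`, the sigmoid parameterization of the angle range, and the uniform sampling of rotations are illustrative assumptions, not the authors' exact implementation; the paper learns a distribution over group elements per layer, of which this is only a simplified instance.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialRotationConv(nn.Module):
    """Sketch of a lifting convolution equivariant to rotations in a
    learnable range [-theta, theta], with theta trained end-to-end."""

    def __init__(self, in_channels, out_channels, kernel_size, num_samples=8):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1
        )
        # Unconstrained parameter mapped to a maximum angle in (0, pi].
        self.theta_logit = nn.Parameter(torch.zeros(1))
        self.num_samples = num_samples

    def rotated_weight(self, angle):
        # Rotate the filter bank by `angle` with a differentiable affine grid.
        c, i, k, _ = self.weight.shape
        cos, sin = torch.cos(angle), torch.sin(angle)
        rot = torch.stack(
            [torch.stack([cos, -sin, torch.zeros_like(cos)]),
             torch.stack([sin, cos, torch.zeros_like(cos)])]
        ).unsqueeze(0)  # (1, 2, 3) affine matrix
        grid = F.affine_grid(rot, size=(1, 1, k, k), align_corners=False)
        w = self.weight.reshape(c * i, 1, k, k)
        w_rot = F.grid_sample(w, grid.expand(c * i, -1, -1, -1), align_corners=False)
        return w_rot.reshape(c, i, k, k)

    def forward(self, x):
        # Learnable maximum rotation; full equivariance is recovered as theta -> pi.
        theta = math.pi * torch.sigmoid(self.theta_logit)
        # Reparameterized uniform samples in [-theta, theta], so gradients reach theta.
        u = torch.rand(self.num_samples, device=x.device) * 2.0 - 1.0
        angles = u * theta
        outputs = [F.conv2d(x, self.rotated_weight(a), padding="same") for a in angles]
        # Stack responses over the sampled group elements: (B, num_samples, C_out, H, W).
        return torch.stack(outputs, dim=1)
```

A usage example under the same assumptions: `y = PartialRotationConv(3, 16, 5)(torch.randn(2, 3, 32, 32))` yields a feature map with an extra rotation axis; stacking such layers lets each one settle on its own equivariance range during training.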
Author Information
David W. Romero (Vrije Universiteit Amsterdam)
Suhas Lohit (Mitsubishi Electric Research Labs)
More from the Same Authors
- 2022 Poster: Relaxing Equivariance Constraints with Non-stationary Continuous Filters »
  Tycho van der Ouderaa · David W. Romero · Mark van der Wilk
- 2022 Poster: What Makes a "Good" Data Augmentation in Knowledge Distillation - A Statistical Perspective »
  Huan Wang · Suhas Lohit · Michael Jones · Yun Fu