

Invited Talk by Sophia Sanborn
Workshop: UniReps: Unifying Representations in Neural Models

Symmetry and Universality

Fri 15 Dec 7 a.m. PST — 7:30 a.m. PST

Abstract:

Artificial neural networks trained on natural data exhibit a striking phenomenon: regardless of exact initialization, dataset, or training objective, models trained on the same data domain frequently converge to similar learned representations. This phenomenon is known as convergent learning. The first layers of diverse image models, for example, tend to learn Gabor filters and color-contrast detectors. Remarkably, many of these same features are observed in the visual cortex, suggesting the existence of “universal” representations that transcend biological and artificial substrates. In this talk, I will present theoretical work that explains convergent learning as a byproduct of the symmetries of natural data, i.e., the transformations that leave perceptual content invariant. We provide a mathematical proof that certain features (namely, harmonics) are guaranteed to emerge in neural networks trained on tasks that require invariance to the actions of a group of transformations, such as translations or rotations. Given the centrality of invariance to many machine learning tasks, our proof explains a broad class of convergent learning phenomena across a broad class of neural network architectures, while offering a new perspective on classical neuroscience results. This work lays a foundation for understanding neural representations in brains and machines through the mathematics of symmetry.
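As a loose illustration of the core mathematical fact behind this claim (not the talk's actual construction, which is not given here): the harmonics of the cyclic translation group are the discrete Fourier modes, and any operator that commutes with cyclic translation, i.e., a circulant matrix, is diagonalized by them. The NumPy sketch below verifies this numerically; all variable names are illustrative.

```python
import numpy as np

# Harmonics of the cyclic group C_N are the discrete Fourier modes
# e^{2*pi*i*k*n/N}. Any translation-equivariant linear operator on
# signals of length N is a circulant matrix, and every circulant
# matrix is diagonalized by these harmonics -- a toy version of why
# harmonic features arise under translation invariance.

N = 8
rng = np.random.default_rng(0)

# Random circulant (translation-equivariant) operator: row i is the
# kernel cyclically shifted by i positions.
kernel = rng.standard_normal(N)
C = np.stack([np.roll(kernel, i) for i in range(N)], axis=0)

# Unitary DFT matrix: column k is the k-th harmonic, normalized.
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Change of basis into the harmonics: F^H C F should be diagonal
# up to floating-point error.
D = F.conj().T @ C @ F
off_diag = D - np.diag(np.diag(D))
print("max off-diagonal magnitude:", np.abs(off_diag).max())
```

Running this prints an off-diagonal magnitude on the order of machine precision (~1e-15), confirming that the Fourier harmonics simultaneously diagonalize every translation-equivariant operator, independent of the particular kernel.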
