

Invited Talk by Andrew Saxe
Workshop: UniReps: Unifying Representations in Neural Models

When representations align: Universality in representation learning dynamics

Fri 15 Dec 7:30 a.m. PST — 8 a.m. PST

Abstract:

Deep neural networks come in many sizes and architectures. The choice of architecture, in conjunction with the dataset and learning algorithm, affects the learned neural representations. Yet recent results have shown that different architectures learn representations with striking qualitative similarities. Vision transformers, for instance, align with human neural responses to natural images about as well as trained convolutional neural networks do. Why might different systems learn similar representations? Here we derive an effective theory of representation learning under the assumption that the encoding map from input to hidden representation and the decoding map from representation to output are arbitrary smooth functions. This theory schematizes representation learning dynamics in the regime of complex, large architectures, where hidden representations are not strongly constrained by the parametrization. We show through experiments that the effective theory describes aspects of representation learning dynamics across a range of deep networks with different activation functions and architectures, and exhibits phenomena analogous to the 'rich' and 'lazy' regimes. While many network behaviors depend quantitatively on architecture, our findings point to certain behaviors that are widely conserved once models are sufficiently flexible.
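
As a minimal sketch of the setup the abstract describes (the notation here is assumed for illustration and is not taken from the talk): write the network as a composition f(x) = \psi(h(x)), where the encoding map h sends inputs to hidden representations and the decoding map \psi sends representations to outputs, with both treated as arbitrary smooth functions. If the parametrization places no strong constraint on the hidden representations, gradient-flow training moves each datapoint's representation directly down the loss gradient:

    \dot{h}(x_i) \propto -\frac{\partial \mathcal{L}}{\partial h(x_i)},
    \qquad
    \mathcal{L} = \sum_i \ell\bigl(\psi(h(x_i)),\, y_i\bigr)

Since nothing in these dynamics refers to layer widths, activation functions, or connectivity, any sufficiently flexible architecture in this regime would trace out similar representational trajectories, which is one way to read the universality claim above.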
