

Simon Kornblith · Invited Talk
Workshop: UniReps: Unifying Representations in Neural Models

Local and global structure in neural network representations


Abstract:

Empirically, and sometimes even theoretically, neural network training objectives lead similar data points to cluster near one another in learned representations. However, the global structure of these representations, i.e., the relative locations of the clusters of similar points, is typically less constrained. In this talk, I'll first present results from our recent work demonstrating that this global structure is important for downstream tasks that require learning from few examples, and that it can be substantially improved using six orders of magnitude fewer data points than were used for pretraining. I'll then discuss the implications of this local/global dichotomy for measuring similarity between neural network representations.
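As background for the last point, one widely used measure of similarity between neural network representations is linear centered kernel alignment (CKA). The sketch below is illustrative only, not the specific analysis from the talk; the matrix shapes and random data are assumptions. It shows that linear CKA compares two representation matrices over the same examples and is invariant to orthogonal rotations of the feature space.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices X (n x d1) and Y (n x d2),
    where rows correspond to the same n examples."""
    # Center each feature dimension
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))  # hypothetical representations of 100 examples
# An orthogonal rotation of the features leaves linear CKA unchanged
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(linear_cka(X, X @ Q))  # close to 1.0
```

Because the measure is invariant to rotation, it is sensitive to which points lie near each other (local structure) while being agnostic to how the whole representation is oriented, which is one reason the local/global distinction matters for interpreting similarity scores.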
