

Spotlight in Workshop: UniReps: Unifying Representations in Neural Models

Comparing Representational and Functional Similarity in Small Transformer Language Models

Dan Friedman · Andrew Lampinen · Lucas Dixon · Danqi Chen · Asma Ghandeharioun

[ Project Page ]
presentation: UniReps: Unifying Representations in Neural Models
Fri 15 Dec 6:15 a.m. PST — 3:15 p.m. PST

Abstract:

In many situations, it would be helpful to be able to characterize the solution learned by a neural network, both for answering scientific questions (e.g., how architecture changes affect generalization) and for addressing practical concerns (e.g., auditing for potentially unsafe behavior). One approach is to try to understand these models by studying the representations that they learn---for example, comparing whether two networks learn similar representations. However, it is not always clear how much representation-level analyses can tell us about how a model makes predictions. In this work, we explore this question in the context of small Transformer language models, which we train on a synthetic, hierarchical language task. We train models with different sizes and random initializations, evaluating performance over the course of training and on a variety of systematic generalization splits. We find that existing methods for measuring representation similarity are not always correlated with behavioral metrics---i.e., models with similar representations do not always make similar predictions---and the results vary depending on the choice of representation. Our results highlight the importance of understanding representations in terms of the role they play in the neural algorithm.
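The abstract contrasts representational similarity with behavioral (functional) similarity but does not name the specific metrics used. As a hedged illustration only, the sketch below pairs one widely used representational measure, linear CKA, with a simple behavioral measure, next-token prediction agreement; the function names and the toy data are hypothetical and are not taken from the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_examples, d1) and (n_examples, d2)."""
    # Center each feature dimension before comparing.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

def prediction_agreement(logits_a, logits_b):
    """Fraction of examples on which two models predict the same next
    token; a simple behavioral (functional) similarity metric."""
    return (logits_a.argmax(axis=-1) == logits_b.argmax(axis=-1)).mean()

# Toy illustration with random data standing in for two trained models.
rng = np.random.default_rng(0)
n, d, vocab = 128, 64, 50
reps_a = rng.normal(size=(n, d))        # hidden states from model A (hypothetical)
reps_b = rng.normal(size=(n, d))        # hidden states from model B (hypothetical)
logits_a = rng.normal(size=(n, vocab))  # next-token logits from model A
logits_b = rng.normal(size=(n, vocab))  # next-token logits from model B

print("representational similarity (CKA):", linear_cka(reps_a, reps_b))
print("functional similarity (agreement):", prediction_agreement(logits_a, logits_b))
```

The paper's finding can be read as: the first number (computed from some choice of hidden representation) need not track the second, and the relationship depends on which layer or representation is compared.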
