Workshop: UniReps: Unifying Representations in Neural Models

Linearly Structured World Representations in Maze-Solving Transformers

Michael Ivanitskiy · Alexander Spies · Tilman Räuker · Guillaume Corlouer · Christopher Mathwin · Lucia Quirke · Can Rager · Rusheb Shah · Dan Valentine · Cecilia Diniz Behn · Katsumi Inoue · Samy Wu Fung

Presentation: Fri 15 Dec 6:15 a.m. PST — 3:15 p.m. PST


The emergence of seemingly similar representations across tasks and neural architectures suggests that convergent properties may underlie sophisticated behavior. One form of representation that seems particularly fundamental to reasoning in many artificial (and perhaps natural) networks is the formation of world models, which decompose observed task structures into re-usable perceptual primitives and task-relevant relations. In this work, we show that auto-regressive transformers tasked with solving mazes learn to linearly represent the structure of mazes, and that the formation of these representations coincides with a sharp increase in generalization performance. Furthermore, we find preliminary evidence for Adjacency Heads which may play a role in computing valid paths through mazes.
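The claim that maze structure is "linearly represented" is the kind of claim usually tested with a linear probe: fit a linear map from internal activations to structural features (e.g., which walls between cells are open) and check decoding accuracy. The sketch below illustrates the idea on synthetic data only — the variable names (`d_model`, `n_edges`), the synthetic activations, and the least-squares probe are illustrative assumptions, not the paper's actual setup or code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_edges, n_samples = 64, 10, 500

# Hypothetical "decoder" directions, one per maze edge, planted so that
# the synthetic activations genuinely encode edge labels linearly.
W_true = rng.normal(size=(d_model, n_edges))

# Binary edge labels (is this connection open?) and synthetic activations:
# a linear embedding of the labels plus small noise.
y = rng.integers(0, 2, size=(n_samples, n_edges)).astype(float)
acts = y @ W_true.T + 0.1 * rng.normal(size=(n_samples, d_model))

# Fit a linear probe by least squares: acts @ W ≈ y.
W, *_ = np.linalg.lstsq(acts, y, rcond=None)
preds = (acts @ W) > 0.5
accuracy = (preds == y.astype(bool)).mean()
print(f"probe accuracy: {accuracy:.3f}")
```

On real model internals one would fit the probe on held-out activations from a trained transformer; high held-out accuracy of such a *linear* map is what licenses calling the representation linearly structured.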