

Poster

Provable Guarantees for Model Performance via Mechanistic Interpretability

Jason Gross · Rajashree Agrawal · Thomas Kwa · Euan Ong · Chun Hei Yip · Alex Gibson · Soufiane Noubir · Lawrence Chan

East Exhibit Hall A-C #3106
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: In this work, we propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally lower bounding the accuracy of 151 small transformers trained on a Max-of-$k$ task. We create 102 different computer-assisted proof strategies and assess the length of each proof and the tightness of its bound on each of our models. Using quantitative metrics, we show that shorter proofs seem to require and provide more mechanistic understanding, and that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless noise as a key challenge for using mechanistic interpretability to generate compact proofs of model performance.
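To make the Max-of-$k$ task concrete, here is a minimal sketch of generating one input/label pair. The function name and the specific parameters (`k=4`, `vocab_size=64`) are illustrative assumptions, not taken from the paper:

```python
import random

def max_of_k_example(k=4, vocab_size=64, seed=0):
    """Sample a Max-of-k instance: a sequence of k integer tokens
    drawn uniformly from {0, ..., vocab_size - 1}, labeled with its maximum.
    A model trained on this task must output the largest token in the sequence."""
    rng = random.Random(seed)
    seq = [rng.randrange(vocab_size) for _ in range(k)]
    return seq, max(seq)

seq, label = max_of_k_example()
```

A proof of an accuracy lower bound for such a model certifies, over the input distribution above, how often the model's output matches `max(seq)`.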
