Neel Nanda: Sparse Autoencoders - Assessing the evidence
Neel Nanda
2024 Invited Talk
in
Workshop: Interpretable AI: Past, Present and Future
Abstract
Sparse autoencoders are a technique for interpreting which concepts are represented in a model's activations, and they have been a major focus of recent mechanistic interpretability work. In this talk, Neel will assess what the field has learned over the past year about how well sparse autoencoders work, the biggest problems with them, and what he sees as the next steps for the field.
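For context on the technique the talk assesses: a sparse autoencoder learns an overcomplete dictionary of features over a model's activations, trained to reconstruct those activations while an L1 penalty keeps most feature activations at zero. The sketch below is a minimal illustrative version of the standard L1-penalized setup, not the specific architecture discussed in the talk; names like d_model, d_sae, and the batch of random activations are stand-ins.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder over model activations.

    Learns an overcomplete dictionary (d_sae >> d_model): each latent
    feature ideally fires for one interpretable concept.
    """
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, acts: torch.Tensor):
        # Encode: sparse, non-negative feature activations.
        f = torch.relu((acts - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the original activations from the features.
        recon = f @ self.W_dec + self.b_dec
        return recon, f

# One training step: reconstruction loss plus an L1 sparsity penalty.
sae = SparseAutoencoder(d_model=768, d_sae=768 * 8)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, 768)  # stand-in batch of residual-stream activations
recon, f = sae(acts)
l1_coeff = 1e-3  # illustrative value; trades reconstruction off against sparsity
loss = (recon - acts).pow(2).mean() + l1_coeff * f.abs().sum(dim=-1).mean()
loss.backward()
opt.step()
```

The tension between the two loss terms, reconstruction fidelity versus sparsity, is central to the questions the talk raises about how well the learned features correspond to real concepts.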
Video