Mechanisms of Symbol Processing in Transformers
Paul Smolensky · Roland Fernandez · Zhenghao Herbert Zhou · Mattia Opper · Adam Davies · Jianfeng Gao
Abstract
We construct a 100% mechanistically-explainable transformer that perfectly performs an in-context learning task requiring it to infer, and then reason over, latent syntactic structure. It does so by implementing a program written in a symbolic, Turing-complete language drawn from a family of leading models of the human cognitive architecture.