Poster

How Far Can Transformers Reason? The Locality Barrier and Inductive Scratchpad

Emmanuel Abbe · Samy Bengio · Aryo Lotfi · Colin Sandon · Omid Saremi


Abstract:

Can Transformers predict new syllogisms by composing established ones? More generally, what types of targets can be learned by such models from scratch? Recent works show that Transformers can be Turing-complete in terms of expressivity, but this does not address the learnability objective. This paper puts forward the notion of 'distribution locality' to capture when weak learning is efficiently achievable by regular Transformers, where the locality measures the least number of tokens required, in addition to the token histogram, to correlate nontrivially with the target. As shown experimentally and theoretically under additional assumptions, distributions with high locality cannot be learned efficiently. In particular, syllogisms cannot be composed on long chains. Furthermore, we argue that (i) an agnostic scratchpad cannot help break the locality, (ii) an educated scratchpad can help if it breaks the locality at each step, and (iii) a notion of 'inductive scratchpad' can both break the locality and help with out-of-distribution generalization.
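For concreteness, one plausible formalization of distribution locality, consistent with the description above (the notation $X_S$ for a subset $S$ of token positions, $\overline{X}$ for the token histogram, and the polynomial correlation threshold are assumptions here, not necessarily the paper's exact definition), is:

\[
\mathrm{Loc}(X, Y) \;=\; \min \bigl\{\, k \;:\; \exists\, S \subseteq [n],\ |S| = k,\ I\bigl(X_S, \overline{X};\, Y\bigr) \ge n^{-C} \,\bigr\}
\]

where $X$ is an input of $n$ tokens, $Y$ is the target, $I$ denotes mutual information, and $C > 0$ is a constant. High locality then means that every small set of tokens, even taken together with the histogram, is nearly uninformative about the target.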
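Likewise, a minimal toy sketch of the inductive-scratchpad idea applied to composing syllogisms along a chain; everything below (the inductive_scratchpad function, the "state:" line format, the rules dictionary) is illustrative rather than the paper's actual setup. The point is that each scratchpad step is computable from the question plus the previous state alone, so no single prediction has to combine many distant tokens at once:

# Toy sketch of an "inductive scratchpad" for chaining syllogisms
# (one-step implications). Illustrative only, not the paper's format.
from typing import Optional

def inductive_scratchpad(rules: dict[str, str], start: str, goal: str,
                         max_steps: int = 100) -> tuple[list[str], bool]:
    """Derive `goal` from `start` by chaining one-step implications.

    rules maps a premise to its conclusion, e.g. {"a": "b", "b": "c"}
    encodes the syllogisms a=>b and b=>c. The returned list is the
    scratchpad: each entry records only the current state, so step t+1
    is computable from (rules, step t) alone -- the inductive property.
    """
    state = start
    pad = [f"state: {state}"]
    for _ in range(max_steps):
        if state == goal:
            return pad, True
        nxt: Optional[str] = rules.get(state)
        if nxt is None:  # chain is stuck before reaching the goal
            return pad, False
        state = nxt
        pad.append(f"state: {state}")
    return pad, False

# Example: a chain a => b => c => d. Each scratchpad line reduces the
# long-range composition to a sequence of single-hop updates.
rules = {"a": "b", "b": "c", "c": "d"}
pad, ok = inductive_scratchpad(rules, "a", "d")
print("\n".join(pad), "\nreached goal:", ok)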
