Poster
Exploring Length Generalization in Large Language Models
Cem Anil · Yuhuai Wu · Anders Andreassen · Aitor Lewkowycz · Vedant Misra · Vinay Ramasesh · Ambrose Slone · Guy Gur-Ari · Ethan Dyer · Behnam Neyshabur

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #107

The ability to extrapolate from short problem instances to longer ones is an important form of out-of-distribution generalization in reasoning tasks, and is crucial when learning from datasets where longer problem instances are rare. These include theorem proving, solving quantitative mathematics problems, and reading/summarizing novels. In this paper, we run careful empirical studies exploring the length generalization capabilities of transformer-based language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale. We then show that combining pretrained large language models' in-context learning abilities with scratchpad prompting (asking the model to output solution steps before producing an answer) results in a dramatic improvement in length generalization. We conduct careful failure analyses of each learning modality and identify common sources of mistakes that highlight opportunities for equipping language models with the ability to generalize to longer problems.
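The scratchpad prompting idea described in the abstract can be sketched as a prompting pattern: the few-shot exemplar spells out intermediate solution steps before the final answer, rather than mapping question directly to answer. The helper names, task (multi-digit addition), and prompt wording below are hypothetical illustrations, not the paper's exact prompts:

```python
# Hypothetical sketch of scratchpad vs. direct prompting for a
# length-generalization task (multi-digit addition). Prompt wording
# is illustrative only.

def naive_prompt(a: int, b: int) -> str:
    """Few-shot prompt that asks the model for the answer directly."""
    return (
        "Q: 12 + 34\n"
        "A: 46\n"
        f"Q: {a} + {b}\n"
        "A:"
    )

def scratchpad_prompt(a: int, b: int) -> str:
    """Few-shot prompt whose exemplar writes out intermediate steps
    (digit-by-digit addition with carries) before the final answer."""
    worked_example = (
        "Q: 12 + 34\n"
        "Scratchpad:\n"
        "  ones: 2 + 4 = 6, carry 0\n"
        "  tens: 1 + 3 + 0 = 4, carry 0\n"
        "A: 46\n"
    )
    # The model is asked to continue from "Scratchpad:", producing its
    # own step-by-step work before emitting the final "A:" line.
    return worked_example + f"Q: {a} + {b}\nScratchpad:"
```

The key difference is where generation begins: the naive prompt elicits the answer token immediately, while the scratchpad prompt elicits the intermediate computation first, which the paper reports dramatically improves extrapolation to longer inputs.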

Author Information

Cem Anil (University of Toronto)

I'm a first-year PhD student at the University of Toronto and the Vector Institute, supervised by Roger Grosse and Geoffrey Hinton.

Yuhuai Wu (Google)
Anders Andreassen (Google)
Aitor Lewkowycz (Inflection AI)
Vedant Misra (Google)
Vinay Ramasesh (Google)
Ambrose Slone (Google)

Currently at Google X as part of a team doing deep learning research. Formerly at Apple, working on computer vision and deep learning.

Guy Gur-Ari (Google)
Ethan Dyer (Blueshift, Google Research)
Behnam Neyshabur (Google)
