Abstract
Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we show that this behavior is driven by the distribution of the training data itself. In-context learning emerges when the training data exhibits particular distributional properties such as burstiness (items appear in clusters rather than being uniformly distributed over time) and a large number of rarely occurring classes. In-context learning also emerges more strongly when item meanings or interpretations are dynamic rather than fixed. These properties are exemplified by natural language, but are also inherent to naturalistic data in a wide range of other domains. They also depart significantly from the uniform, i.i.d. training distributions typically used for standard supervised learning. In our initial experiments, we found that in-context learning traded off against more conventional weight-based learning, and models were unable to achieve both simultaneously. However, our later experiments uncovered that the two modes of learning could co-exist in a single model when it was trained on data following a skewed Zipfian distribution -- another common property of naturalistic data, including language. In further experiments, we found that naturalistic data distributions were only able to elicit in-context learning in transformers, and not in recurrent models. Our findings indicate how the transformer architecture works together with particular properties of the training data to drive the intriguing emergent in-context learning behavior of large language models, and indicate how future work might encourage both in-context and in-weights learning in domains beyond language.
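The distributional properties the abstract names -- a skewed Zipfian class distribution over many rarely occurring classes, and burstiness within training sequences -- can be illustrated with a toy sampler. This is a minimal sketch, not the paper's actual data pipeline; the parameter values (`n_classes`, `zipf_exponent`, `seq_len`, `burst_size`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes = 1000      # many classes, most of them rare (illustrative assumption)
zipf_exponent = 1.0   # skew of the class distribution (illustrative assumption)
seq_len = 8           # items per training sequence (illustrative assumption)
burst_size = 3        # copies of the "bursty" class per sequence (illustrative assumption)

# Zipfian class frequencies: p(rank k) proportional to 1 / k^alpha.
ranks = np.arange(1, n_classes + 1, dtype=float)
probs = ranks ** -zipf_exponent
probs /= probs.sum()

def bursty_sequence():
    """Build one training sequence in which a single Zipf-sampled class
    'bursts' (appears several times in a cluster-like fashion), while the
    remaining slots are filled i.i.d. from the same skewed distribution."""
    bursty_class = rng.choice(n_classes, p=probs)
    others = rng.choice(n_classes, size=seq_len - burst_size, p=probs)
    seq = np.concatenate([np.full(burst_size, bursty_class), others])
    rng.shuffle(seq)  # burstiness is about within-sequence repetition, not position
    return seq, bursty_class

seq, bursty_class = bursty_sequence()
print(seq, bursty_class)
```

A uniform i.i.d. sampler, by contrast, would draw every slot from `np.full(n_classes, 1 / n_classes)`; the paper's point is that sequences like the bursty, Zipf-skewed ones above favor the emergence of in-context learning.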
Author Information
Stephanie Chan (DeepMind)
Adam Santoro (DeepMind)
Andrew Lampinen (DeepMind)
Jane Wang (DeepMind)
Jane Wang is a research scientist at DeepMind on the neuroscience team, working on meta-reinforcement learning and neuroscience-inspired artificial agents. Her background is in physics, complex systems, and computational and cognitive neuroscience.
Aaditya Singh (University College London, University of London)
Pierre Richemond (DeepMind, Imperial College)
James McClelland (Stanford University)
Felix Hill (DeepMind)
More from the Same Authors
- 2021: Alchemy: A benchmark and analysis toolkit for meta-reinforcement learning agents »
  Jane Wang · Michael King · Nicolas Porcel · Zeb Kurth-Nelson · Tina Zhu · Charles Deck · Peter Choy · Mary Cassin · Malcolm Reynolds · Francis Song · Gavin Buttimore · David Reichert · Neil Rabinowitz · Loic Matthey · Demis Hassabis · Alexander Lerchner · Matt Botvinick
- 2021: Task-driven Discovery of Perceptual Schemas for Generalization in Reinforcement Learning »
  Wilka Carvalho · Andrew Lampinen · Kyriacos Nikiforou · Felix Hill · Murray Shanahan
- 2021: Continual with Sujeeth Bharadwaj, Gabriel Silva, Eric Traut, Jane Wang »
  Sujeeth Bharadwaj · Jane Wang · Weiwei Yang
- 2022: Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks »
  Yuxuan Li · James McClelland
- 2022: Transformers generalize differently from information stored in context vs in weights »
  Stephanie Chan · Ishita Dasgupta · Junkyung Kim · Dharshan Kumaran · Andrew Lampinen · Felix Hill
- 2022: Learning to Reason With Relational Abstractions »
  Andrew Nam · James McClelland · Mengye Ren · Chelsea Finn
- 2022: Out-of-Distribution Generalization in Algorithmic Reasoning Through Curriculum Learning »
  Andrew Nam · Mustafa Abdool · Trevor Maxfield · James McClelland
- 2022: Collaborating with language models for embodied reasoning »
  Ishita Dasgupta · Christine Kaeser-Chen · Kenneth Marino · Arun Ahuja · Sheila Babayan · Felix Hill · Rob Fergus
- 2023 Poster: Meta-in-context learning in large language models »
  Julian Coda-Forno · Marcel Binz · Zeynep Akata · Matt Botvinick · Jane Wang · Eric Schulz
- 2023 Poster: The Transient Nature of Emergent In-context Learning in Transformers »
  Aaditya Singh · Stephanie Chan · Ted Moskovitz · Erin Grant · Andrew Saxe · Felix Hill
- 2023 Poster: Improving neural network representations using human similarity judgments »
  Lukas Muttenthaler · Lorenz Linhardt · Jonas Dippel · Robert Vandermeulen · Katherine Hermann · Andrew Lampinen · Simon Kornblith
- 2023 Poster: Discovering Representations for Transfer with Successor Features and the Deep Option Keyboard »
  Wilka Carvalho · Andre Saraiva · Angelos Filos · Andrew Lampinen · Loic Matthey · Richard L Lewis · Honglak Lee · Satinder Singh · Danilo Jimenez Rezende · Daniel Zoran
- 2023 Poster: Passive learning of active causal strategies in agents and language models »
  Andrew Lampinen · Stephanie Chan · Ishita Dasgupta · Andrew Nam · Jane Wang
- 2023 Workshop: MATH-AI: The 3rd Workshop on Mathematical Reasoning and AI »
  Zhenwen Liang · Albert Q. Jiang · Katie Collins · Pan Lu · Kaiyu Yang · Sean Welleck · James McClelland
- 2022: The World is not Uniformly Distributed; Important Implications for Deep RL »
  Stephanie Chan
- 2022: Meaning without reference in large language models »
  Steven Piantadosi · Felix Hill
- 2022 Panel: Panel 2B-3: Data Distributional Properties… & What Can Transformers… »
  Dimitris Tsipras · Stephanie Chan
- 2022: Invited Talk: James McClelland »
  James McClelland
- 2022 Poster: Intra-agent speech permits zero-shot task acquisition »
  Chen Yan · Federico Carnevale · Petko I Georgiev · Adam Santoro · Aurelia Guy · Alistair Muldal · Chia-Chun Hung · Joshua Abramson · Timothy Lillicrap · Gregory Wayne
- 2022 Poster: Semantic Exploration from Language Abstractions and Pretrained Representations »
  Allison Tam · Neil Rabinowitz · Andrew Lampinen · Nicholas Roy · Stephanie Chan · DJ Strouse · Jane Wang · Andrea Banino · Felix Hill
- 2021: Live Q&A Session 2 with Susan Athey, Yoshua Bengio, Sujeeth Bharadwaj, Jane Wang, Joshua Vogelstein, Weiwei Yang »
  Susan Athey · Yoshua Bengio · Sujeeth Bharadwaj · Jane Wang · Weiwei Yang · Joshua T Vogelstein
- 2021 Workshop: Math AI for Education (MATHAI4ED): Bridging the Gap Between Research and Smart Education »
  Pan Lu · Yuhuai Wu · Sean Welleck · Xiaodan Liang · Eric Xing · James McClelland
- 2021 Poster: Attention over Learned Object Embeddings Enables Complex Visual Reasoning »
  David Ding · Felix Hill · Adam Santoro · Malcolm Reynolds · Matt Botvinick
- 2021 Poster: Multimodal Few-Shot Learning with Frozen Language Models »
  Maria Tsimpoukelli · Jacob L Menick · Serkan Cabi · S. M. Ali Eslami · Oriol Vinyals · Felix Hill
- 2021 Poster: Towards mental time travel: a hierarchical memory for reinforcement learning agents »
  Andrew Lampinen · Stephanie Chan · Andrea Banino · Felix Hill
- 2021 Oral: Attention over Learned Object Embeddings Enables Complex Visual Reasoning »
  David Ding · Felix Hill · Adam Santoro · Malcolm Reynolds · Matt Botvinick
- 2020: Introduction for invited speaker, Frank Hutter »
  Jane Wang
- 2020 Workshop: Meta-Learning »
  Jane Wang · Joaquin Vanschoren · Erin Grant · Jonathan Richard Schwarz · Francesco Visin · Jeff Clune · Roberto Calandra
- 2020 Poster: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning »
  Jean-Bastien Grill · Florian Strub · Florent Altché · Corentin Tallec · Pierre Richemond · Elena Buchatskaya · Carl Doersch · Bernardo Avila Pires · Daniel (Zhaohan) Guo · Mohammad Gheshlaghi Azar · Bilal Piot · koray kavukcuoglu · Remi Munos · Michal Valko
- 2020 Oral: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning »
  Jean-Bastien Grill · Florian Strub · Florent Altché · Corentin Tallec · Pierre Richemond · Elena Buchatskaya · Carl Doersch · Bernardo Avila Pires · Daniel (Zhaohan) Guo · Mohammad Gheshlaghi Azar · Bilal Piot · koray kavukcuoglu · Remi Munos · Michal Valko
- 2020 Poster: What shapes feature representations? Exploring datasets, architectures, and training »
  Katherine L. Hermann · Andrew Lampinen
- 2020 Tutorial: (Track1) Where Neuroscience meets AI (And What’s in Store for the Future) »
  Jane Wang · Kevin Miller · Adam Marblestone
- 2019: Poster Session »
  Matthia Sabatelli · Adam Stooke · Amir Abdi · Paulo Rauber · Leonard Adolphs · Ian Osband · Hardik Meisheri · Karol Kurach · Johannes Ackermann · Matt Benatan · GUO ZHANG · Chen Tessler · Dinghan Shen · Mikayel Samvelyan · Riashat Islam · Murtaza Dalal · Luke Harries · Andrey Kurenkov · Konrad Żołna · Sudeep Dasari · Kristian Hartikainen · Ofir Nachum · Kimin Lee · Markus Holzleitner · Vu Nguyen · Francis Song · Christopher Grimm · Felipe Leno da Silva · Yuping Luo · Yifan Wu · Alex Lee · Thomas Paine · Wei-Yang Qu · Daniel Graves · Yannis Flet-Berliac · Yunhao Tang · Suraj Nair · Matthew Hausknecht · Akhil Bagaria · Simon Schmitt · Bowen Baker · Paavo Parmas · Benjamin Eysenbach · Lisa Lee · Siyu Lin · Daniel Seita · Abhishek Gupta · Riley Simmons-Edler · Yijie Guo · Kevin Corder · Vikash Kumar · Scott Fujimoto · Adam Lerer · Ignasi Clavera Gilaberte · Nicholas Rhinehart · Ashvin Nair · Ge Yang · Lingxiao Wang · Sungryull Sohn · J. Fernando Hernandez-Garcia · Xian Yeow Lee · Rupesh Srivastava · Khimya Khetarpal · Chenjun Xiao · Luckeciano Carvalho Melo · Rishabh Agarwal · Tianhe Yu · Glen Berseth · Devendra Singh Chaplot · Jie Tang · Anirudh Srinivasan · Tharun Kumar Reddy Medini · Aaron Havens · Misha Laskin · Asier Mujika · Rohan Saphal · Joseph Marino · Alex Ray · Joshua Achiam · Ajay Mandlekar · Zhuang Liu · Danijar Hafner · Zhiwen Tang · Ted Xiao · Michael Walton · Jeff Druce · Ferran Alet · Zhang-Wei Hong · Stephanie Chan · Anusha Nagabandi · Hao Liu · Hao Sun · Ge Liu · Dinesh Jayaraman · John Co-Reyes · Sophia Sanborn
- 2019: Panel Discussion led by Grace Lindsay »
  Grace Lindsay · Blake Richards · Doina Precup · Jacqueline Gottlieb · Jeff Clune · Jane Wang · Richard Sutton · Angela Yu · Ida Momennejad
- 2019: Coffee Break & Poster Session »
  Samia Mohinta · Andrea Agostinelli · Alexandra Moringen · Jee Hang Lee · Yat Long Lo · Wolfgang Maass · Blue Sheffer · Colin Bredenberg · Benjamin Eysenbach · Liyu Xia · Efstratios Markou · Jan Lichtenberg · Pierre Richemond · Tony Zhang · JB Lanier · Baihan Lin · William Fedus · Glen Berseth · Marta Sarrico · Matthew Crosby · Stephen McAleer · Sina Ghiassian · Franz Scherr · Guillaume Bellec · Darjan Salaj · Arinbjörn Kolbeinsson · Matthew Rosenberg · Jaehoon Shin · Sang Wan Lee · Guillermo Cecchi · Irina Rish · Elias Hajek
- 2019: Invited Talk #1: From brains to agents and back »
  Jane Wang
- 2019 Workshop: Meta-Learning »
  Roberto Calandra · Ignasi Clavera Gilaberte · Frank Hutter · Joaquin Vanschoren · Jane Wang
- 2018 Poster: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures »
  Sergey Bartunov · Adam Santoro · Blake Richards · Luke Marris · Geoffrey E Hinton · Timothy Lillicrap
- 2018 Poster: Neural Arithmetic Logic Units »
  Andrew Trask · Felix Hill · Scott Reed · Jack Rae · Chris Dyer · Phil Blunsom
- 2018 Poster: Relational recurrent neural networks »
  Adam Santoro · Ryan Faulkner · David Raposo · Jack Rae · Mike Chrzanowski · Theophane Weber · Daan Wierstra · Oriol Vinyals · Razvan Pascanu · Timothy Lillicrap
- 2017: Panel Discussion »
  Felix Hill · Olivier Pietquin · Jack Gallant · Raymond Mooney · Sanja Fidler · Chen Yu · Devi Parikh
- 2017: Grounded Language Learning in a Simulated 3D World »
  Felix Hill
- 2017 Poster: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Spotlight: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap