
Invited Talk
Workshop: Generalization in Planning (GenPlan '23)

In-Context Learning of Sequential Decision-Making Tasks

Roberta Raileanu

Sat 16 Dec 2 p.m. PST — 2:35 p.m. PST


Training autonomous agents that can learn new tasks from only a handful of demonstrations is a long-standing problem in machine learning. Recently, transformers have been shown to learn new language or vision tasks from only a few examples without any weight updates, a capability referred to as in-context learning. However, the sequential decision-making setting poses additional challenges and has a lower tolerance for errors, since the environment's stochasticity or the agent's actions can lead to unseen, and sometimes unrecoverable, states. In this talk, I will show that naively applying transformers to this setting does not enable in-context learning of new tasks. I will then show how different design choices, such as model size, data diversity, environment stochasticity, and trajectory burstiness, affect in-context learning of sequential decision-making tasks. Finally, I will show that by training on large, diverse offline datasets, transformers are able to learn entirely new tasks with unseen states, actions, dynamics, and rewards, using only a handful of demonstrations and no weight updates. I will end my talk with a discussion of the limitations of offline learning approaches in sequential decision-making and some directions for future work.
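To make the setup concrete, the sketch below illustrates the general idea of in-context learning for sequential decision-making: a few demonstration trajectories from a new task are packed into the model's context, and a frozen model predicts the action for the current state with no weight updates. All names here are illustrative, and a simple nearest-neighbor lookup stands in for the frozen transformer, which in the actual work attends over the whole prompt.

```python
# Hedged sketch: in-context action prediction from a few demonstrations.
# The "policy" below is a nearest-neighbor stand-in for a frozen transformer;
# every name and data structure here is an assumption, not the talk's API.
from dataclasses import dataclass
from typing import Dict, List, Tuple

State = Tuple[float, ...]
Action = int

@dataclass
class Transition:
    state: State
    action: Action

def build_context(demos: List[List[Transition]], query: State) -> Dict:
    """Flatten a handful of demonstration trajectories into one prompt,
    followed by the query state whose action we want predicted."""
    return {"prompt": [t for traj in demos for t in traj], "query": query}

def in_context_policy(context: Dict) -> Action:
    """Stand-in for a frozen model: copy the action of the nearest
    demonstrated state. No parameters are updated at test time."""
    query = context["query"]

    def dist(t: Transition) -> float:
        return sum((a - b) ** 2 for a, b in zip(t.state, query))

    return min(context["prompt"], key=dist).action

# Two short demonstrations of a task unseen during training.
demos = [
    [Transition((0.0, 0.0), 1), Transition((1.0, 0.0), 1)],
    [Transition((0.0, 1.0), 0)],
]
ctx = build_context(demos, query=(0.9, 0.1))
print(in_context_policy(ctx))  # nearest demo state is (1.0, 0.0) -> 1
```

The talk's point is that whether a real transformer trained on offline trajectories generalizes this way depends on design choices such as data diversity and trajectory burstiness; this toy lookup only fixes the interface, not the learning behavior.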
