Imitation Learning (IL) algorithms are typically evaluated in the same environment that was used to create demonstrations. This rewards precise reproduction of demonstrations in one particular environment, but provides little information about how robustly an algorithm can generalise the demonstrator's intent to substantially different deployment settings. This paper presents the MAGICAL benchmark suite, which permits systematic evaluation of generalisation by quantifying robustness to different kinds of distribution shift that an IL algorithm is likely to encounter in practice. Using the MAGICAL suite, we confirm that existing IL algorithms overfit significantly to the context in which demonstrations are provided. We also show that standard methods for reducing overfitting are effective at creating narrow perceptual invariances, but are not sufficient to enable transfer to contexts that require substantially different behaviour, which suggests that new approaches will be needed in order to robustly generalise demonstrator intent. Code and data for the MAGICAL suite are available at https://github.com/qxcv/magical/
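The evaluation idea behind the benchmark, train on demonstrations from one "demo" variant of a task, then score the learned policy on held-out test variants that shift layout, colour, dynamics, and so on, can be sketched in plain Python. This is an illustrative summary of the protocol, not the MAGICAL API; the variant names follow the kinds of shift the paper describes, and the scores are made-up numbers for one hypothetical IL algorithm.

```python
def generalisation_gap(scores):
    """Summarise overfitting as the gap between the demo-variant score
    and the mean score over all test variants."""
    demo = scores["Demo"]
    test_scores = [s for name, s in scores.items() if name != "Demo"]
    mean_test = sum(test_scores) / len(test_scores)
    return demo, mean_test, demo - mean_test

# Hypothetical per-variant scores in [0, 1] for one algorithm on one task.
scores = {
    "Demo": 0.92,      # environment in which demonstrations were collected
    "Jitter": 0.85,    # small perturbations of initial poses
    "Layout": 0.40,    # substantially different object/agent layout
    "Colour": 0.55,    # recoloured objects
    "Dynamics": 0.62,  # altered physics
}

demo, mean_test, gap = generalisation_gap(scores)
print(f"demo={demo:.2f}  mean_test={mean_test:.2f}  gap={gap:.2f}")
```

A large gap between the demo score and the mean test score is the quantitative signature of the overfitting the abstract describes: the policy reproduces the demonstrations well in the original context but fails to carry the demonstrator's intent to shifted contexts.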
Author Information
Sam Toyer (UC Berkeley)
Rohin Shah (UC Berkeley)
Rohin is a Research Scientist on the technical AGI safety team at DeepMind. He completed his PhD at the Center for Human-Compatible AI at UC Berkeley, where he worked on building AI systems that can learn to assist a human user, even if they don't initially know what the user wants. He is particularly interested in big-picture questions about artificial intelligence: What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? He writes up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.
Andrew Critch (UC Berkeley)
Stuart Russell (UC Berkeley)
More from the Same Authors
- 2021 Spotlight: Uncertain Decisions Facilitate Better Preference Learning »
  Cassidy Laidlaw · Stuart Russell
- 2021 Spotlight: Optimal Policies Tend To Seek Power »
  Alex Turner · Logan Smith · Rohin Shah · Andrew Critch · Prasad Tadepalli
- 2021: An Empirical Investigation of Representation Learning for Imitation »
  Cynthia Chen · Sam Toyer · Cody Wild · Scott Emmons · Ian Fischer · Kuang-Huei Lee · Neel Alex · Steven Wang · Ping Luo · Stuart Russell · Pieter Abbeel · Rohin Shah
- 2021: Cross-Domain Imitation Learning via Optimal Transport »
  Arnaud Fickinger · Samuel Cohen · Stuart Russell · Brandon Amos
- 2022: Adversarial Policies Beat Professional-Level Go AIs »
  Tony Wang · Adam Gleave · Nora Belrose · Tom Tseng · Michael Dennis · Yawen Duan · Viktor Pogrebniak · Joseph Miller · Sergey Levine · Stuart Russell
- 2021: NeurIPS RL Competitions Results Presentations »
  Rohin Shah · Liam Paull · Tabitha Lee · Tim Rocktäschel · Heinrich Küttler · Sharada Mohanty · Manuel Wuethrich
- 2021: BASALT: A MineRL Competition on Solving Human-Judged Task + Q&A »
  Rohin Shah · Cody Wild · Steven Wang · Neel Alex · Brandon Houghton · William Guss · Sharada Mohanty · Stephanie Milani · Nicholay Topin · Pieter Abbeel · Stuart Russell · Anca Dragan
- 2021 Poster: Scalable Online Planning via Reinforcement Learning Fine-Tuning »
  Arnaud Fickinger · Hengyuan Hu · Brandon Amos · Stuart Russell · Noam Brown
- 2021 Poster: Uncertain Decisions Facilitate Better Preference Learning »
  Cassidy Laidlaw · Stuart Russell
- 2021 Poster: Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism »
  Paria Rashidinejad · Banghua Zhu · Cong Ma · Jiantao Jiao · Stuart Russell
- 2021 Poster: MADE: Exploration via Maximizing Deviation from Explored Regions »
  Tianjun Zhang · Paria Rashidinejad · Jiantao Jiao · Yuandong Tian · Joseph Gonzalez · Stuart Russell
- 2021: Diamond: A MineRL Competition on Training Sample-Efficient Agents + Q&A »
  William Guss · Alara Dirik · Byron Galbraith · Brandon Houghton · Anssi Kanervisto · Noboru Kuno · Stephanie Milani · Sharada Mohanty · Karolis Ramanauskas · Ruslan Salakhutdinov · Rohin Shah · Nicholay Topin · Steven Wang · Cody Wild
- 2021 Poster: Optimal Policies Tend To Seek Power »
  Alex Turner · Logan Smith · Rohin Shah · Andrew Critch · Prasad Tadepalli
- 2020: Spotlight Talk: Benefits of Assistance over Reward Learning »
  Rohin Shah
- 2020 Workshop: Navigating the Broader Impacts of AI Research »
  Carolyn Ashurst · Rosie Campbell · Deborah Raji · Solon Barocas · Stuart Russell
- 2020 Poster: SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory »
  Paria Rashidinejad · Jiantao Jiao · Stuart Russell
- 2020 Oral: SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory »
  Paria Rashidinejad · Jiantao Jiao · Stuart Russell
- 2020 Poster: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design »
  Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine
- 2020 Oral: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design »
  Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine
- 2019 Poster: On the Utility of Learning about Humans for Human-AI Coordination »
  Micah Carroll · Rohin Shah · Mark Ho · Tom Griffiths · Sanjit Seshia · Pieter Abbeel · Anca Dragan
- 2018 Poster: Meta-Learning MCMC Proposals »
  Tongzhou Wang · YI WU · Dave Moore · Stuart Russell
- 2018 Poster: Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making »
  Nishant Desai · Andrew Critch · Stuart J Russell
- 2018 Poster: Learning Plannable Representations with Causal InfoGAN »
  Thanard Kurutach · Aviv Tamar · Ge Yang · Stuart Russell · Pieter Abbeel