A wide range of reinforcement learning (RL) problems, including robustness, transfer learning, unsupervised RL, and emergent complexity, require specifying a distribution of tasks or environments in which a policy will be trained. However, creating a useful distribution of environments is error-prone and takes significant developer time and effort. We propose Unsupervised Environment Design (UED) as an alternative paradigm, in which developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments. Existing approaches to automatically generating environments suffer from common failure modes: domain randomization cannot generate structure or adapt the difficulty of the environment to the agent's learning progress, and minimax adversarial training leads to worst-case environments that are often unsolvable. To generate structured, solvable environments for our protagonist agent, we introduce a second, antagonist agent that is allied with the environment-generating adversary. The adversary is motivated to generate environments which maximize regret, defined as the difference between the antagonist's and the protagonist's returns. We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED). Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.
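The regret objective in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation; the function name and the toy return values are made up for the example, and in practice the three agents are trained with standard policy-gradient RL against these rewards.

```python
# A minimal sketch of the PAIRED regret objective described in the abstract.
# All names here are illustrative, not taken from the paper's code.

def paired_regret(protagonist_returns, antagonist_returns):
    """Per-environment regret: the antagonist's return minus the protagonist's.

    The adversary proposes an environment, then both agents act in it. A high
    regret means the environment is solvable (the antagonist succeeds) but the
    protagonist has not yet learned to solve it.
    """
    assert len(protagonist_returns) == len(antagonist_returns)
    return [a - p for p, a in zip(protagonist_returns, antagonist_returns)]

# The adversary is rewarded with the regret, the protagonist with its negation,
# and the antagonist with its own return, so the resulting three-player game
# pushes the adversary toward challenging-but-solvable environments.
regrets = paired_regret([0.25, 1.0, 0.5], [1.0, 1.0, 0.0])
print(regrets)  # [0.75, 0.0, -0.5]
```

Because unsolvable environments yield zero return for both agents (and hence zero regret), maximizing regret steers the adversary away from the degenerate worst-case environments that plain minimax training produces.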
Author Information
Michael Dennis (University of California Berkeley)
Natasha Jaques (Google Brain, UC Berkeley)
Eugene Vinitsky (UC Berkeley)
Alexandre Bayen (UC Berkeley)
Stuart Russell (UC Berkeley)
Andrew Critch (UC Berkeley)
Sergey Levine (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Oral: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design »
  Tue Dec 8th 02:30 -- 02:45 AM · Room: Orals & Spotlights: Reinforcement Learning
More from the Same Authors
- 2020 Workshop: Navigating the Broader Impacts of AI Research »
  Carolyn Ashurst · Rosie Campbell · Deborah Raji · Solon Barocas · Stuart Russell
- 2020 Poster: Model Inversion Networks for Model-Based Optimization »
  Aviral Kumar · Sergey Levine
- 2020 Poster: Continual Learning of Control Primitives: Skill Discovery via Reset-Games »
  Kelvin Xu · Siddharth Verma · Chelsea Finn · Sergey Levine
- 2020 Poster: Gradient Surgery for Multi-Task Learning »
  Tianhe Yu · Saurabh Kumar · Abhishek Gupta · Sergey Levine · Karol Hausman · Chelsea Finn
- 2020 Poster: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement »
  Ben Eysenbach · Xinyang Geng · Sergey Levine · Russ Salakhutdinov
- 2020 Poster: Conservative Q-Learning for Offline Reinforcement Learning »
  Aviral Kumar · Aurick Zhou · George Tucker · Sergey Levine
- 2020 Oral: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement »
  Ben Eysenbach · Xinyang Geng · Sergey Levine · Russ Salakhutdinov
- 2020 Tutorial: (Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications Q&A »
  Sergey Levine · Aviral Kumar
- 2020 Poster: Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction »
  Michael Janner · Igor Mordatch · Sergey Levine
- 2020 Poster: The MAGICAL Benchmark for Robust Imitation »
  Sam Toyer · Rohin Shah · Andrew Critch · Stuart Russell
- 2020 Poster: One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL »
  Saurabh Kumar · Aviral Kumar · Sergey Levine · Chelsea Finn
- 2020 Poster: SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory »
  Paria Rashidinejad · Jiantao Jiao · Stuart Russell
- 2020 Poster: Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors »
  Karl Pertsch · Oleh Rybkin · Frederik Ebert · Shenghao Zhou · Dinesh Jayaraman · Chelsea Finn · Sergey Levine
- 2020 Poster: Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model »
  Alex X. Lee · Anusha Nagabandi · Pieter Abbeel · Sergey Levine
- 2020 Oral: SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory »
  Paria Rashidinejad · Jiantao Jiao · Stuart Russell
- 2020 Poster: MOPO: Model-based Offline Policy Optimization »
  Tianhe Yu · Garrett Thomas · Lantao Yu · Stefano Ermon · James Zou · Sergey Levine · Chelsea Finn · Tengyu Ma
- 2020 Poster: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction »
  Aviral Kumar · Abhishek Gupta · Sergey Levine
- 2020 Spotlight: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction »
  Aviral Kumar · Abhishek Gupta · Sergey Levine
- 2020 Tutorial: (Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications »
  Sergey Levine · Aviral Kumar
- 2019 Workshop: Emergent Communication: Towards Natural Language »
  Abhinav Gupta · Michael Noukhovitch · Cinjon Resnick · Natasha Jaques · Angelos Filos · Marie Ossenkopf · Angeliki Lazaridou · Jakob Foerster · Ryan Lowe · Douwe Kiela · Kyunghyun Cho
- 2019 Poster: Wasserstein Dependency Measure for Representation Learning »
  Sherjil Ozair · Corey Lynch · Yoshua Bengio · Aaron van den Oord · Sergey Levine · Pierre Sermanet
- 2019 Poster: Planning with Goal-Conditioned Policies »
  Soroush Nasiriany · Vitchyr Pong · Steven Lin · Sergey Levine
- 2019 Poster: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning »
  Ben Eysenbach · Russ Salakhutdinov · Sergey Levine
- 2019 Poster: MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies »
  Xue Bin Peng · Michael Chang · Grace Zhang · Pieter Abbeel · Sergey Levine
- 2019 Poster: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction »
  Aviral Kumar · Justin Fu · George Tucker · Sergey Levine
- 2019 Poster: Unsupervised Curricula for Visual Meta-Reinforcement Learning »
  Allan Jabri · Kyle Hsu · Abhishek Gupta · Ben Eysenbach · Sergey Levine · Chelsea Finn
- 2019 Poster: Compositional Plan Vectors »
  Coline Devin · Daniel Geng · Pieter Abbeel · Trevor Darrell · Sergey Levine
- 2019 Spotlight: Unsupervised Curricula for Visual Meta-Reinforcement Learning »
  Allan Jabri · Kyle Hsu · Abhishek Gupta · Ben Eysenbach · Sergey Levine · Chelsea Finn
- 2019 Poster: Causal Confusion in Imitation Learning »
  Pim de Haan · Dinesh Jayaraman · Sergey Levine
- 2019 Poster: Meta-Learning with Implicit Gradients »
  Aravind Rajeswaran · Chelsea Finn · Sham Kakade · Sergey Levine
- 2019 Poster: When to Trust Your Model: Model-Based Policy Optimization »
  Michael Janner · Justin Fu · Marvin Zhang · Sergey Levine
- 2019 Poster: Guided Meta-Policy Search »
  Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn
- 2019 Spotlight: Guided Meta-Policy Search »
  Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn
- 2019 Oral: Causal Confusion in Imitation Learning »
  Pim de Haan · Dinesh Jayaraman · Sergey Levine
- 2019 Poster: Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems »
  Asma Ghandeharioun · Judy Hanwen Shen · Natasha Jaques · Craig Ferguson · Noah Jones · Agata Lapedriza · Rosalind Picard
- 2018 Poster: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models »
  Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Spotlight: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models »
  Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Poster: Meta-Learning MCMC Proposals »
  Tongzhou Wang · Yi Wu · Dave Moore · Stuart Russell
- 2018 Poster: Probabilistic Model-Agnostic Meta-Learning »
  Chelsea Finn · Kelvin Xu · Sergey Levine
- 2018 Poster: Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making »
  Nishant Desai · Andrew Critch · Stuart J Russell
- 2018 Poster: Meta-Reinforcement Learning of Structured Exploration Strategies »
  Abhishek Gupta · Russell Mendonca · YuXuan Liu · Pieter Abbeel · Sergey Levine
- 2018 Poster: Learning Plannable Representations with Causal InfoGAN »
  Thanard Kurutach · Aviv Tamar · Ge Yang · Stuart Russell · Pieter Abbeel
- 2018 Poster: Visual Reinforcement Learning with Imagined Goals »
  Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine
- 2018 Spotlight: Visual Reinforcement Learning with Imagined Goals »
  Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine
- 2018 Spotlight: Meta-Reinforcement Learning of Structured Exploration Strategies »
  Abhishek Gupta · Russell Mendonca · YuXuan Liu · Pieter Abbeel · Sergey Levine
- 2018 Poster: Visual Memory for Robust Path Following »
  Ashish Kumar · Saurabh Gupta · David Fouhey · Sergey Levine · Jitendra Malik
- 2018 Poster: Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition »
  Justin Fu · Avi Singh · Dibya Ghosh · Larry Yang · Sergey Levine
- 2018 Oral: Visual Memory for Robust Path Following »
  Ashish Kumar · Saurabh Gupta · David Fouhey · Sergey Levine · Jitendra Malik
- 2018 Poster: Data-Efficient Hierarchical Reinforcement Learning »
  Ofir Nachum · Shixiang (Shane) Gu · Honglak Lee · Sergey Levine
- 2018 Poster: Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior »
  Sid Reddy · Anca Dragan · Sergey Levine
- 2017 Workshop: Workshop on Meta-Learning »
  Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine
- 2017 Poster: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning »
  Justin Fu · John Co-Reyes · Sergey Levine
- 2017 Spotlight: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning »
  Justin Fu · John Co-Reyes · Sergey Levine
- 2017 Demonstration: Deep Robotic Learning using Visual Imagination and Meta-Learning »
  Chelsea Finn · Frederik Ebert · Tianhe Yu · Annie Xie · Sudeep Dasari · Pieter Abbeel · Sergey Levine
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016 Workshop: Deep Learning for Action and Interaction »
  Chelsea Finn · Raia Hadsell · David Held · Sergey Levine · Percy Liang
- 2016 Demonstration: Interactive musical improvisation with Magenta »
  Adam Roberts · Jesse Engel · Curtis Hawthorne · Ian Simon · Elliot Waite · Sageev Oore · Natasha Jaques · Cinjon Resnick · Douglas Eck
- 2016 Poster: Adaptive Averaging in Accelerated Descent Dynamics »
  Walid Krichene · Alexandre Bayen · Peter Bartlett
- 2016 Poster: Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games »
  Maximilian Balandat · Walid Krichene · Claire Tomlin · Alexandre Bayen
- 2016 Poster: Value Iteration Networks »
  Aviv Tamar · Sergey Levine · Pieter Abbeel · Yi Wu · Garrett Thomas
- 2016 Oral: Value Iteration Networks »
  Aviv Tamar · Sergey Levine · Pieter Abbeel · Yi Wu · Garrett Thomas
- 2015 Poster: Accelerated Mirror Descent in Continuous and Discrete Time »
  Walid Krichene · Alexandre Bayen · Peter Bartlett
- 2015 Spotlight: Accelerated Mirror Descent in Continuous and Discrete Time »
  Walid Krichene · Alexandre Bayen · Peter Bartlett
- 2014 Workshop: Novel Trends and Applications in Reinforcement Learning »
  Csaba Szepesvari · Marc Deisenroth · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez
- 2014 Poster: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics »
  Sergey Levine · Pieter Abbeel
- 2014 Spotlight: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics »
  Sergey Levine · Pieter Abbeel
- 2013 Poster: Variational Policy Search via Trajectory Optimization »
  Sergey Levine · Vladlen Koltun
- 2010 Poster: Feature Construction for Inverse Reinforcement Learning »
  Sergey Levine · Zoran Popovic · Vladlen Koltun