Exploration is a fundamental challenge in reinforcement learning (RL). Many current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation. However, many practical applications of RL involve learning more than a single task, and prior tasks can be used to inform how exploration should be performed in new tasks. In this work, we study how prior tasks can inform an agent about how to explore effectively in new situations. We introduce a novel gradient-based fast adaptation algorithm – model agnostic exploration with structured noise (MAESN) – to learn exploration strategies from prior experience. The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into a policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN is more effective at learning exploration strategies when compared to prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation.
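The latent-space exploration strategy described in the abstract can be sketched in a few lines of code. The following is a hedged toy illustration, not the paper's implementation: the policy is conditioned on a latent z sampled once per episode from a learned per-task distribution (structured, temporally coherent noise, in contrast to independent per-timestep action noise), and fast adaptation updates the latent mean with a score-function policy gradient. All names, dimensions, and the toy dynamics and reward are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2-D states/actions, a linear policy conditioned
# on a per-task latent z sampled ONCE per episode (structured noise),
# rather than adding independent noise at every timestep.
STATE_DIM, ACTION_DIM, LATENT_DIM = 2, 2, 2
W_s = rng.normal(size=(ACTION_DIM, STATE_DIM))   # stand-in for meta-learned policy weights
W_z = rng.normal(size=(ACTION_DIM, LATENT_DIM))  # maps latent -> action bias

def policy(state, z):
    """Deterministic policy given the state and the episode-level latent z."""
    return W_s @ state + W_z @ z

def run_episode(mu, log_sigma, horizon=10):
    """Sample one latent per episode, yielding temporally coherent exploration."""
    z = mu + np.exp(log_sigma) * rng.normal(size=LATENT_DIM)
    state = np.zeros(STATE_DIM)
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(state, z)
        state = state + 0.1 * action        # toy dynamics
        total_reward += -np.sum(state**2)   # toy reward: stay near the origin
    return z, total_reward

def adapt_latent(mu, log_sigma, episodes=20, lr=0.05):
    """Fast adaptation on a new task: score-function gradient on the latent mean."""
    zs, rewards = zip(*(run_episode(mu, log_sigma) for _ in range(episodes)))
    rewards = np.array(rewards)
    advantages = rewards - rewards.mean()
    # grad_mu of log N(z; mu, sigma^2) is (z - mu) / sigma^2
    grads = [a * (z - mu) / np.exp(2 * log_sigma) for z, a in zip(zs, advantages)]
    return mu + lr * np.mean(grads, axis=0)

mu = np.zeros(LATENT_DIM)
log_sigma = np.zeros(LATENT_DIM)
for _ in range(5):
    mu = adapt_latent(mu, log_sigma)
print("adapted latent mean:", mu)
```

In the full method, both the policy initialization and the latent distribution are meta-trained across prior tasks so that a few such gradient steps on the latent space suffice on a new task; this sketch only shows the per-task adaptation mechanics.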
Author Information
Abhishek Gupta (University of California, Berkeley)
Russell Mendonca (UC Berkeley)
YuXuan Liu (UC Berkeley)
Pieter Abbeel (UC Berkeley | Gradescope | Covariant)
Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and Advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA; early career awards from NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE; and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.
Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as other decision-making domains. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, and deep reinforcement learning algorithms, among other topics.
Related Events (a corresponding poster, oral, or spotlight)
-
2018 Poster: Meta-Reinforcement Learning of Structured Exploration Strategies »
Wed. Dec 5th through Thu. Dec 6th, Room 517 AB #134
More from the Same Authors
-
2021 : B-Pref: Benchmarking Preference-Based Reinforcement Learning »
Kimin Lee · Laura Smith · Anca Dragan · Pieter Abbeel -
2021 Spotlight: Robust Predictable Control »
Ben Eysenbach · Russ Salakhutdinov · Sergey Levine -
2021 Spotlight: Behavior From the Void: Unsupervised Active Pre-Training »
Hao Liu · Pieter Abbeel -
2021 Spotlight: Offline Reinforcement Learning as One Big Sequence Modeling Problem »
Michael Janner · Qiyang Li · Sergey Levine -
2021 Spotlight: Pragmatic Image Compression for Human-in-the-Loop Decision-Making »
Sid Reddy · Anca Dragan · Sergey Levine -
2021 : An Empirical Investigation of Representation Learning for Imitation »
Cynthia Chen · Sam Toyer · Cody Wild · Scott Emmons · Ian Fischer · Kuang-Huei Lee · Neel Alex · Steven Wang · Ping Luo · Stuart Russell · Pieter Abbeel · Rohin Shah -
2021 : URLB: Unsupervised Reinforcement Learning Benchmark »
Misha Laskin · Denis Yarats · Hao Liu · Kimin Lee · Albert Zhan · Kevin Lu · Catherine Cang · Lerrel Pinto · Pieter Abbeel -
2021 : Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets »
Frederik Ebert · Yanlai Yang · Karl Schmeckpeper · Bernadette Bucher · Kostas Daniilidis · Chelsea Finn · Sergey Levine -
2021 : Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments »
Dhruv Shah · Daniel Shin · Nick Rhinehart · Ali Agha · David D Fan · Sergey Levine -
2021 : Extending the WILDS Benchmark for Unsupervised Adaptation »
Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Ian Stavness · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang -
2021 : Test Time Robustification of Deep Models via Adaptation and Augmentation »
Marvin Zhang · Sergey Levine · Chelsea Finn -
2021 : Temporal-Difference Value Estimation via Uncertainty-Guided Soft Updates »
Litian Liang · Yaosheng Xu · Stephen McAleer · Dailin Hu · Alexander Ihler · Pieter Abbeel · Roy Fox -
2021 : Target Entropy Annealing for Discrete Soft Actor-Critic »
Yaosheng Xu · Dailin Hu · Litian Liang · Stephen McAleer · Pieter Abbeel · Roy Fox -
2021 : Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning »
Dhruv Shah · Ted Xiao · Alexander Toshev · Sergey Levine · brian ichter -
2021 : Count-Based Temperature Scheduling for Maximum Entropy Reinforcement Learning »
Dailin Hu · Pieter Abbeel · Roy Fox -
2021 : Data Sharing without Rewards in Multi-Task Offline Reinforcement Learning »
Tianhe Yu · Aviral Kumar · Yevgen Chebotar · Chelsea Finn · Sergey Levine · Karol Hausman -
2021 : Should I Run Offline Reinforcement Learning or Behavioral Cloning? »
Aviral Kumar · Joey Hong · Anikait Singh · Sergey Levine -
2021 : DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization »
Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine -
2021 : Reward Uncertainty for Exploration in Preference-based Reinforcement Learning »
Xinran Liang · Katherine Shu · Kimin Lee · Pieter Abbeel -
2021 : CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery »
Misha Laskin · Hao Liu · Xue Bin Peng · Denis Yarats · Aravind Rajeswaran · Pieter Abbeel -
2021 : SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning »
Jongjin Park · Younggyo Seo · Jinwoo Shin · Honglak Lee · Pieter Abbeel · Kimin Lee -
2021 : A Framework for Efficient Robotic Manipulation »
Albert Zhan · Ruihan Zhao · Lerrel Pinto · Pieter Abbeel · Misha Laskin -
2021 : Offline Reinforcement Learning with In-sample Q-Learning »
Ilya Kostrikov · Ashvin Nair · Sergey Levine -
2021 : C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks »
Tianjun Zhang · Ben Eysenbach · Russ Salakhutdinov · Sergey Levine · Joseph Gonzalez -
2021 : Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback »
Xiaofei Wang · Kimin Lee · Kourosh Hakhamaneshi · Pieter Abbeel · Misha Laskin -
2021 : The Information Geometry of Unsupervised Reinforcement Learning »
Ben Eysenbach · Russ Salakhutdinov · Sergey Levine -
2021 : Mismatched No More: Joint Model-Policy Optimization for Model-Based RL »
Ben Eysenbach · Alexander Khazatsky · Sergey Levine · Russ Salakhutdinov -
2021 : Offline Meta-Reinforcement Learning with Online Self-Supervision »
Vitchyr Pong · Ashvin Nair · Laura Smith · Catherine Huang · Sergey Levine -
2021 : Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments »
Daniel Shin · Dhruv Shah · Ali Agha · Nicholas Rhinehart · Sergey Levine -
2021 : CoMPS: Continual Meta Policy Search »
Glen Berseth · Zhiwei Zhang · Grace Zhang · Chelsea Finn · Sergey Levine -
2021 : Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL »
Catherine Cang · Aravind Rajeswaran · Pieter Abbeel · Misha Laskin -
2021 : Hierarchical Few-Shot Imitation with Skill Transition Models »
Kourosh Hakhamaneshi · Ruihan Zhao · Albert Zhan · Pieter Abbeel · Misha Laskin -
2021 : Offline Reinforcement Learning with Implicit Q-Learning »
Ilya Kostrikov · Ashvin Nair · Sergey Levine -
2021 : Pretraining for Language-Conditioned Imitation with Transformers »
Aaron Putterman · Kevin Lu · Igor Mordatch · Pieter Abbeel -
2021 : TRAIL: Near-Optimal Imitation Learning with Suboptimal Data »
Mengjiao (Sherry) Yang · Sergey Levine · Ofir Nachum -
2022 : You Only Live Once: Single-Life Reinforcement Learning »
Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn -
2022 : Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement »
Michael Chang · Alyssa L Dayan · Franziska Meier · Tom Griffiths · Sergey Levine · Amy Zhang -
2022 : Quantifying Uncertainty in Foundation Models via Ensembles »
Meiqi Sun · Wilson Yan · Pieter Abbeel · Igor Mordatch -
2022 : Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes »
Aviral Kumar · Rishabh Agarwal · XINYANG GENG · George Tucker · Sergey Levine -
2022 : Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning »
Aviral Kumar · Anikait Singh · Frederik Ebert · Yanlai Yang · Chelsea Finn · Sergey Levine -
2022 : Offline Reinforcement Learning from Heteroskedastic Data Via Support Constraints »
Anikait Singh · Aviral Kumar · Quan Vuong · Yevgen Chebotar · Sergey Levine -
2022 : Multi-Environment Pretraining Enables Transfer to Action Limited Datasets »
David Venuto · Mengjiao (Sherry) Yang · Pieter Abbeel · Doina Precup · Igor Mordatch · Ofir Nachum -
2022 : Skill Acquisition by Instruction Augmentation on Offline Datasets »
Ted Xiao · Harris Chan · Pierre Sermanet · Ayzaan Wahid · Anthony Brohan · Karol Hausman · Sergey Levine · Jonathan Tompson -
2022 : PnP-Nav: Plug-and-Play Policies for Generalizable Visual Navigation Across Robots »
Dhruv Shah · Ajay Sridhar · Arjun Bhorkar · Noriaki Hirose · Sergey Levine -
2022 : Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts »
Amrith Setlur · Don Dennis · Benjamin Eysenbach · Aditi Raghunathan · Chelsea Finn · Virginia Smith · Sergey Levine -
2022 : Train Offline, Test Online: A Real Robot Learning Benchmark »
Gaoyue Zhou · Victoria Dean · Mohan Kumar Srirama · Aravind Rajeswaran · Jyothish Pari · Kyle Hatch · Aryan Jain · Tianhe Yu · Pieter Abbeel · Lerrel Pinto · Chelsea Finn · Abhinav Gupta -
2022 : Learning to Extrapolate: A Transductive Approach »
Aviv Netanyahu · Abhishek Gupta · Max Simchowitz · Kaiqing Zhang · Pulkit Agrawal -
2022 : Confidence-Conditioned Value Functions for Offline Reinforcement Learning »
Joey Hong · Aviral Kumar · Sergey Levine -
2022 : Efficient Deep Reinforcement Learning Requires Regulating Statistical Overfitting »
Qiyang Li · Aviral Kumar · Ilya Kostrikov · Sergey Levine -
2022 : Contrastive Example-Based Control »
Kyle Hatch · Sarthak J Shetty · Benjamin Eysenbach · Tianhe Yu · Rafael Rafailov · Russ Salakhutdinov · Sergey Levine · Chelsea Finn -
2022 : Offline Reinforcement Learning for Customizable Visual Navigation »
Dhruv Shah · Arjun Bhorkar · Hrishit Leen · Ilya Kostrikov · Nicholas Rhinehart · Sergey Levine -
2022 : A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning »
Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Russ Salakhutdinov -
2022 : CLUTR: Curriculum Learning via Unsupervised Task Representation Learning »
Abdus Salam Azad · Izzeddin Gur · Aleksandra Faust · Pieter Abbeel · Ion Stoica -
2022 : Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning »
Anikait Singh · Aviral Kumar · Frederik Ebert · Yanlai Yang · Chelsea Finn · Sergey Levine -
2022 : Adversarial Policies Beat Professional-Level Go AIs »
Tony Wang · Adam Gleave · Nora Belrose · Tom Tseng · Michael Dennis · Yawen Duan · Viktor Pogrebniak · Joseph Miller · Sergey Levine · Stuart J Russell -
2022 : A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning »
Benjamin Eysenbach · Matthieu Geist · Russ Salakhutdinov · Sergey Levine -
2022 : Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective »
Raj Ghugare · Homanga Bharadhwaj · Benjamin Eysenbach · Sergey Levine · Ruslan Salakhutdinov -
2023 Poster: ReDS: Offline RL With Heteroskedastic Datasets via Support Constraints »
Anikait Singh · Aviral Kumar · Quan Vuong · Yevgen Chebotar · Sergey Levine -
2023 Poster: Ignorance is Bliss: Robust Control via Information Gating »
Manan Tomar · Riashat Islam · Matthew Taylor · Sergey Levine · Philip Bachman -
2023 Poster: Language Quantized AutoEncoders for Data Efficient Text-Image Alignment »
Hao Liu · Wilson Yan · Pieter Abbeel -
2023 Poster: Learning to Influence Human Behavior with Offline Reinforcement Learning »
Joey Hong · Sergey Levine · Anca Dragan -
2023 Poster: Offline Goal-Conditioned RL with Latent States as Actions »
Seohong Park · Dibya Ghosh · Benjamin Eysenbach · Sergey Levine -
2023 Poster: Learning Universal Policies via Text-Guided Video Generation »
Yilun Du · Mengjiao (Sherry) Yang · Bo Dai · Hanjun Dai · Ofir Nachum · Josh Tenenbaum · Dale Schuurmans · Pieter Abbeel -
2023 Poster: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation »
Daiki E Matsunaga · Jongmin Lee · Jaeseok Yoon · Stefanos Leonardos · Pieter Abbeel · Kee-Eung Kim -
2023 Poster: Blockwise Parallel Transformer for Large Models »
Hao Liu · Pieter Abbeel -
2023 Poster: Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control »
Wenlong Huang · Fei Xia · Dhruv Shah · Danny Driess · Andy Zeng · Yao Lu · Pete Florence · Igor Mordatch · Sergey Levine · Karol Hausman · brian ichter -
2023 Poster: Accelerating Exploration with Unlabeled Prior Data »
Qiyang Li · Jason Zhang · Dibya Ghosh · Amy Zhang · Sergey Levine -
2023 Poster: Video Prediction Models as Rewards for Reinforcement Learning »
Alejandro Escontrela · Ademi Adeniji · Wilson Yan · Ajay Jain · Xue Bin Peng · Ken Goldberg · Youngwoon Lee · Danijar Hafner · Pieter Abbeel -
2023 Poster: Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning »
Mitsuhiko Nakamoto · Yuexiang Zhai · Anikait Singh · Max Sobol Mark · Yi Ma · Chelsea Finn · Aviral Kumar · Sergey Levine -
2023 Poster: Accelerating Reinforcement Learning with Value-Conditional State Entropy Exploration »
Dongyoung Kim · Jinwoo Shin · Pieter Abbeel · Younggyo Seo -
2023 Poster: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models »
Ying Fan · Olivia Watkins · Yuqing Du · Hao Liu · Moonkyung Ryu · Craig Boutilier · Pieter Abbeel · Mohammad Ghavamzadeh · Kangwook Lee · Kimin Lee -
2023 Poster: Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? »
Arjun Majumdar · Karmesh Yadav · Sergio Arnaud · Jason Yecheng Ma · Claire Chen · Sneha Silwal · Aryan Jain · Vincent-Pierre Berges · Tingfan Wu · Jay Vakil · Pieter Abbeel · Jitendra Malik · Dhruv Batra · Yixin Lin · Oleksandr Maksymets · Aravind Rajeswaran · Franziska Meier -
2022 Poster: On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning »
Mandi Zhao · Pieter Abbeel · Stephen James -
2022 Poster: MEMO: Test Time Robustness via Adaptation and Augmentation »
Marvin Zhang · Sergey Levine · Chelsea Finn -
2022 Poster: Chain of Thought Imitation with Procedure Cloning »
Mengjiao (Sherry) Yang · Dale Schuurmans · Pieter Abbeel · Ofir Nachum -
2022 Poster: First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization »
Siddharth Reddy · Sergey Levine · Anca Dragan -
2022 Poster: DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning »
Quan Vuong · Aviral Kumar · Sergey Levine · Yevgen Chebotar -
2022 Poster: Masked Autoencoding for Scalable and Generalizable Decision Making »
Fangchen Liu · Hao Liu · Aditya Grover · Pieter Abbeel -
2022 Poster: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions »
Amrith Setlur · Benjamin Eysenbach · Virginia Smith · Sergey Levine -
2022 Poster: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL »
Benjamin Eysenbach · Alexander Khazatsky · Sergey Levine · Russ Salakhutdinov -
2022 Poster: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity »
Abhishek Gupta · Aldo Pacchiano · Yuexiang Zhai · Sham Kakade · Sergey Levine -
2022 Poster: Distributionally Adaptive Meta Reinforcement Learning »
Anurag Ajay · Abhishek Gupta · Dibya Ghosh · Sergey Levine · Pulkit Agrawal -
2022 Poster: You Only Live Once: Single-Life Reinforcement Learning »
Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn -
2022 Poster: Unsupervised Reinforcement Learning with Contrastive Intrinsic Control »
Michael Laskin · Hao Liu · Xue Bin Peng · Denis Yarats · Aravind Rajeswaran · Pieter Abbeel -
2022 Poster: Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation »
Michael Chang · Tom Griffiths · Sergey Levine -
2022 Poster: Data-Driven Offline Decision-Making via Invariant Representation Learning »
Han Qi · Yi Su · Aviral Kumar · Sergey Levine -
2022 Poster: Contrastive Learning as Goal-Conditioned Reinforcement Learning »
Benjamin Eysenbach · Tianjun Zhang · Sergey Levine · Russ Salakhutdinov -
2022 Poster: Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions »
Weirui Ye · Pieter Abbeel · Yang Gao -
2022 Poster: Deep Hierarchical Planning from Pixels »
Danijar Hafner · Kuang-Huei Lee · Ian Fischer · Pieter Abbeel -
2022 Poster: Imitating Past Successes can be Very Suboptimal »
Benjamin Eysenbach · Soumith Udatha · Russ Salakhutdinov · Sergey Levine -
2021 : Retrospective Panel »
Sergey Levine · Nando de Freitas · Emma Brunskill · Finale Doshi-Velez · Nan Jiang · Rishabh Agarwal -
2021 : Playful Interactions for Representation Learning »
Sarah Young · Pieter Abbeel · Lerrel Pinto -
2021 Workshop: Ecological Theory of Reinforcement Learning: How Does Task Design Influence Agent Learning? »
Manfred Díaz · Hiroki Furuta · Elise van der Pol · Lisa Lee · Shixiang (Shane) Gu · Pablo Samuel Castro · Simon Du · Marc Bellemare · Sergey Levine -
2021 : Data-Driven Offline Optimization for Architecting Hardware Accelerators »
Aviral Kumar · Amir Yazdanbakhsh · Milad Hashemi · Kevin Swersky · Sergey Levine -
2021 : Sergey Levine Talk Q&A »
Sergey Levine -
2021 : Opinion Contributed Talk: Sergey Levine »
Sergey Levine -
2021 : Offline Meta-Reinforcement Learning with Online Self-Supervision Q&A »
Vitchyr Pong · Ashvin Nair · Laura Smith · Catherine Huang · Sergey Levine -
2021 : DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization Q&A »
Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine -
2021 Workshop: Distribution shifts: connecting methods and applications (DistShift) »
Shiori Sagawa · Pang Wei Koh · Fanny Yang · Hongseok Namkoong · Jiashi Feng · Kate Saenko · Percy Liang · Sarah Bird · Sergey Levine -
2021 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · Chelsea Finn · David Silver · Matthew Taylor · Martha White · Srijita Das · Yuqing Du · Andrew Patterson · Manan Tomar · Olivia Watkins -
2021 Oral: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification »
Ben Eysenbach · Sergey Levine · Russ Salakhutdinov -
2021 Poster: Hindsight Task Relabelling: Experience Replay for Sparse Reward Meta-RL »
Charles Packer · Pieter Abbeel · Joseph Gonzalez -
2021 Poster: Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings »
Lili Chen · Kimin Lee · Aravind Srinivas · Pieter Abbeel -
2021 Poster: Robust Predictable Control »
Ben Eysenbach · Russ Salakhutdinov · Sergey Levine -
2021 Poster: Which Mutual-Information Representation Learning Objectives are Sufficient for Control? »
Kate Rakelly · Abhishek Gupta · Carlos Florensa · Sergey Levine -
2021 Poster: COMBO: Conservative Offline Model-Based Policy Optimization »
Tianhe Yu · Aviral Kumar · Rafael Rafailov · Aravind Rajeswaran · Sergey Levine · Chelsea Finn -
2021 : BASALT: A MineRL Competition on Solving Human-Judged Tasks + Q&A »
Rohin Shah · Cody Wild · Steven Wang · Neel Alex · Brandon Houghton · William Guss · Sharada Mohanty · Stephanie Milani · Nicholay Topin · Pieter Abbeel · Stuart Russell · Anca Dragan -
2021 Poster: Outcome-Driven Reinforcement Learning via Variational Inference »
Tim G. J. Rudner · Vitchyr Pong · Rowan McAllister · Yarin Gal · Sergey Levine -
2021 Poster: Decision Transformer: Reinforcement Learning via Sequence Modeling »
Lili Chen · Kevin Lu · Aravind Rajeswaran · Kimin Lee · Aditya Grover · Misha Laskin · Pieter Abbeel · Aravind Srinivas · Igor Mordatch -
2021 Poster: Bayesian Adaptation for Covariate Shift »
Aurick Zhou · Sergey Levine -
2021 Poster: Offline Reinforcement Learning as One Big Sequence Modeling Problem »
Michael Janner · Qiyang Li · Sergey Levine -
2021 Poster: Pragmatic Image Compression for Human-in-the-Loop Decision-Making »
Sid Reddy · Anca Dragan · Sergey Levine -
2021 Poster: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification »
Ben Eysenbach · Sergey Levine · Russ Salakhutdinov -
2021 Poster: Mastering Atari Games with Limited Data »
Weirui Ye · Shaohuai Liu · Thanard Kurutach · Pieter Abbeel · Yang Gao -
2021 Poster: Information is Power: Intrinsic Control via Information Capture »
Nicholas Rhinehart · Jenny Wang · Glen Berseth · John Co-Reyes · Danijar Hafner · Chelsea Finn · Sergey Levine -
2021 Poster: Conservative Data Sharing for Multi-Task Offline Reinforcement Learning »
Tianhe Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Sergey Levine · Chelsea Finn -
2021 Poster: Reinforcement Learning with Latent Flow »
Wenling Shang · Xiaofei Wang · Aravind Srinivas · Aravind Rajeswaran · Yang Gao · Pieter Abbeel · Misha Laskin -
2021 Poster: Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability »
Dibya Ghosh · Jad Rahme · Aviral Kumar · Amy Zhang · Ryan Adams · Sergey Levine -
2021 Poster: Behavior From the Void: Unsupervised Active Pre-Training »
Hao Liu · Pieter Abbeel -
2021 Poster: Teachable Reinforcement Learning via Advice Distillation »
Olivia Watkins · Abhishek Gupta · Trevor Darrell · Pieter Abbeel · Jacob Andreas -
2021 Poster: Autonomous Reinforcement Learning via Subgoal Curricula »
Archit Sharma · Abhishek Gupta · Sergey Levine · Karol Hausman · Chelsea Finn -
2021 Poster: Adaptive Risk Minimization: Learning to Adapt to Domain Shift »
Marvin Zhang · Henrik Marklund · Nikita Dhawan · Abhishek Gupta · Sergey Levine · Chelsea Finn -
2020 : Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization »
Brandon Trabucco · Aviral Kumar · XINYANG GENG · Sergey Levine -
2020 : Conservative Objective Models: A Simple Approach to Effective Model-Based Optimization »
Brandon Trabucco · Aviral Kumar · XINYANG GENG · Sergey Levine -
2020 : Panel »
Emma Brunskill · Nan Jiang · Nando de Freitas · Finale Doshi-Velez · Sergey Levine · John Langford · Lihong Li · George Tucker · Rishabh Agarwal · Aviral Kumar -
2020 : Panel discussion »
Pierre-Yves Oudeyer · Marc Bellemare · Peter Stone · Matt Botvinick · Susan Murphy · Anusha Nagabandi · Ashley Edwards · Karen Liu · Pieter Abbeel -
2020 : Contributed Talk: Reset-Free Lifelong Learning with Skill-Space Planning »
Kevin Lu · Aditya Grover · Pieter Abbeel · Igor Mordatch -
2020 : Contributed Talk: MaxEnt RL and Robust Control »
Benjamin Eysenbach · Sergey Levine -
2020 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · Chelsea Finn · Joelle Pineau · David Silver · Satinder Singh · Coline Devin · Misha Laskin · Kimin Lee · Janarthanan Rajendran · Vivek Veeriah -
2020 Poster: Model Inversion Networks for Model-Based Optimization »
Aviral Kumar · Sergey Levine -
2020 Poster: Denoising Diffusion Probabilistic Models »
Jonathan Ho · Ajay Jain · Pieter Abbeel -
2020 Poster: Automatic Curriculum Learning through Value Disagreement »
Yunzhi Zhang · Pieter Abbeel · Lerrel Pinto -
2020 Poster: Continual Learning of Control Primitives : Skill Discovery via Reset-Games »
Kelvin Xu · Siddharth Verma · Chelsea Finn · Sergey Levine -
2020 Poster: Gradient Surgery for Multi-Task Learning »
Tianhe Yu · Saurabh Kumar · Abhishek Gupta · Sergey Levine · Karol Hausman · Chelsea Finn -
2020 Poster: AvE: Assistance via Empowerment »
Yuqing Du · Stas Tiomkin · Emre Kiciman · Daniel Polani · Pieter Abbeel · Anca Dragan -
2020 Poster: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement »
Benjamin Eysenbach · XINYANG GENG · Sergey Levine · Russ Salakhutdinov -
2020 Poster: Conservative Q-Learning for Offline Reinforcement Learning »
Aviral Kumar · Aurick Zhou · George Tucker · Sergey Levine -
2020 Poster: Reinforcement Learning with Augmented Data »
Misha Laskin · Kimin Lee · Adam Stooke · Lerrel Pinto · Pieter Abbeel · Aravind Srinivas -
2020 Poster: Generalized Hindsight for Reinforcement Learning »
Alexander Li · Lerrel Pinto · Pieter Abbeel -
2020 Poster: Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning »
Younggyo Seo · Kimin Lee · Ignasi Clavera Gilaberte · Thanard Kurutach · Jinwoo Shin · Pieter Abbeel -
2020 Spotlight: Reinforcement Learning with Augmented Data »
Misha Laskin · Kimin Lee · Adam Stooke · Lerrel Pinto · Pieter Abbeel · Aravind Srinivas -
2020 Oral: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement »
Benjamin Eysenbach · XINYANG GENG · Sergey Levine · Russ Salakhutdinov -
2020 Tutorial: (Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications Q&A »
Sergey Levine · Aviral Kumar -
2020 Poster: Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction »
Michael Janner · Igor Mordatch · Sergey Levine -
2020 Poster: Sparse Graphical Memory for Robust Planning »
Scott Emmons · Ajay Jain · Misha Laskin · Thanard Kurutach · Pieter Abbeel · Deepak Pathak -
2020 Poster: One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL »
Saurabh Kumar · Aviral Kumar · Sergey Levine · Chelsea Finn -
2020 Poster: Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors »
Karl Pertsch · Oleh Rybkin · Frederik Ebert · Shenghao Zhou · Dinesh Jayaraman · Chelsea Finn · Sergey Levine -
2020 Poster: Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model »
Alex X. Lee · Anusha Nagabandi · Pieter Abbeel · Sergey Levine -
2020 Poster: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design »
Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine -
2020 Poster: MOPO: Model-based Offline Policy Optimization »
Tianhe Yu · Garrett Thomas · Lantao Yu · Stefano Ermon · James Zou · Sergey Levine · Chelsea Finn · Tengyu Ma -
2020 Poster: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction »
Aviral Kumar · Abhishek Gupta · Sergey Levine -
2020 Spotlight: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction »
Aviral Kumar · Abhishek Gupta · Sergey Levine -
2020 Oral: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design »
Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine -
2020 Tutorial: (Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications »
Sergey Levine · Aviral Kumar -
2019 : Poster and Coffee Break 2 »
Karol Hausman · Kefan Dong · Ken Goldberg · Lihong Li · Lin Yang · Lingxiao Wang · Lior Shani · Liwei Wang · Loren Amdahl-Culleton · Lucas Cassano · Marc Dymetman · Marc Bellemare · Marcin Tomczak · Margarita Castro · Marius Kloft · Marius-Constantin Dinu · Markus Holzleitner · Martha White · Mengdi Wang · Michael Jordan · Mihailo Jovanovic · Ming Yu · Minshuo Chen · Moonkyung Ryu · Muhammad Zaheer · Naman Agarwal · Nan Jiang · Niao He · Nikolaus Yasui · Nikos Karampatziakis · Nino Vieillard · Ofir Nachum · Olivier Pietquin · Ozan Sener · Pan Xu · Parameswaran Kamalaruban · Paul Mineiro · Paul Rolland · Philip Amortila · Pierre-Luc Bacon · Prakash Panangaden · Qi Cai · Qiang Liu · Quanquan Gu · Raihan Seraj · Richard Sutton · Rick Valenzano · Robert Dadashi · Rodrigo Toro Icarte · Roshan Shariff · Roy Fox · Ruosong Wang · Saeed Ghadimi · Samuel Sokota · Sean Sinclair · Sepp Hochreiter · Sergey Levine · Sergio Valcarcel Macua · Sham Kakade · Shangtong Zhang · Sheila McIlraith · Shie Mannor · Shimon Whiteson · Shuai Li · Shuang Qiu · Wai Lok Li · Siddhartha Banerjee · Sitao Luan · Tamer Basar · Thinh Doan · Tianhe Yu · Tianyi Liu · Tom Zahavy · Toryn Klassen · Tuo Zhao · Vicenç Gómez · Vincent Liu · Volkan Cevher · Wesley Suttle · Xiao-Wen Chang · Xiaohan Wei · Xiaotong Liu · Xingguo Li · Xinyi Chen · Xingyou Song · Yao Liu · YiDing Jiang · Yihao Feng · Yilun Du · Yinlam Chow · Yinyu Ye · Yishay Mansour · Yonathan Efroni · Yongxin Chen · Yuanhao Wang · Bo Dai · Chen-Yu Wei · Harsh Shrivastava · Hongyang Zhang · Qinqing Zheng · SIDDHARTHA SATPATHI · Xueqing Liu · Andreu Vall -
2019 : Poster Presentations »
Rahul Mehta · Andrew Lampinen · Binghong Chen · Sergio Pascual-Diaz · Jordi Grau-Moya · Aldo Faisal · Jonathan Tompson · Yiren Lu · Khimya Khetarpal · Martin Klissarov · Pierre-Luc Bacon · Doina Precup · Thanard Kurutach · Aviv Tamar · Pieter Abbeel · Jinke He · Maximilian Igl · Shimon Whiteson · Wendelin Boehmer · Raphaël Marinier · Olivier Pietquin · Karol Hausman · Sergey Levine · Chelsea Finn · Tianhe Yu · Lisa Lee · Benjamin Eysenbach · Emilio Parisotto · Eric Xing · Ruslan Salakhutdinov · Hongyu Ren · Anima Anandkumar · Deepak Pathak · Christopher Lu · Trevor Darrell · Alexei Efros · Phillip Isola · Feng Liu · Bo Han · Gang Niu · Masashi Sugiyama · Saurabh Kumar · Janith Petangoda · Johan Ferret · James McClelland · Kara Liu · Animesh Garg · Robert Lange -
2019 : Poster Session »
Matthia Sabatelli · Adam Stooke · Amir Abdi · Paulo Rauber · Leonard Adolphs · Ian Osband · Hardik Meisheri · Karol Kurach · Johannes Ackermann · Matt Benatan · GUO ZHANG · Chen Tessler · Dinghan Shen · Mikayel Samvelyan · Riashat Islam · Murtaza Dalal · Luke Harries · Andrey Kurenkov · Konrad Żołna · Sudeep Dasari · Kristian Hartikainen · Ofir Nachum · Kimin Lee · Markus Holzleitner · Vu Nguyen · Francis Song · Christopher Grimm · Felipe Leno da Silva · Yuping Luo · Yifan Wu · Alex Lee · Thomas Paine · Wei-Yang Qu · Daniel Graves · Yannis Flet-Berliac · Yunhao Tang · Suraj Nair · Matthew Hausknecht · Akhil Bagaria · Simon Schmitt · Bowen Baker · Paavo Parmas · Benjamin Eysenbach · Lisa Lee · Siyu Lin · Daniel Seita · Abhishek Gupta · Riley Simmons-Edler · Yijie Guo · Kevin Corder · Vikash Kumar · Scott Fujimoto · Adam Lerer · Ignasi Clavera Gilaberte · Nicholas Rhinehart · Ashvin Nair · Ge Yang · Lingxiao Wang · Sungryull Sohn · J. Fernando Hernandez-Garcia · Xian Yeow Lee · Rupesh Srivastava · Khimya Khetarpal · Chenjun Xiao · Luckeciano Carvalho Melo · Rishabh Agarwal · Tianhe Yu · Glen Berseth · Devendra Singh Chaplot · Jie Tang · Anirudh Srinivasan · Tharun Kumar Reddy Medini · Aaron Havens · Misha Laskin · Asier Mujika · Rohan Saphal · Joseph Marino · Alex Ray · Joshua Achiam · Ajay Mandlekar · Zhuang Liu · Danijar Hafner · Zhiwen Tang · Ted Xiao · Michael Walton · Jeff Druce · Ferran Alet · Zhang-Wei Hong · Stephanie Chan · Anusha Nagabandi · Hao Liu · Hao Sun · Ge Liu · Dinesh Jayaraman · John Co-Reyes · Sophia Sanborn -
2019 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · Chelsea Finn · Joelle Pineau · David Silver · Satinder Singh · Joshua Achiam · Carlos Florensa · Christopher Grimm · Haoran Tang · Vivek Veeriah -
2019 : Coffee/Poster session 2 »
Xingyou Song · Puneet Mangla · David Salinas · Zhenxun Zhuang · Leo Feng · Shell Xu Hu · Raul Puri · Wesley Maddox · Aniruddh Raghu · Prudencio Tossou · Mingzhang Yin · Ishita Dasgupta · Kangwook Lee · Ferran Alet · Zhen Xu · Jörg Franke · James Harrison · Jonathan Warrell · Guneet Dhillon · Arber Zela · Xin Qiu · Julien Niklas Siems · Russell Mendonca · Louis Schlessinger · Jeffrey Li · Georgiana Manolache · Debojyoti Dutta · Lucas Glass · Abhishek Singh · Gregor Koehler -
2019 : Pieter Abbeel »
Pieter Abbeel -
2019 : Poster Session »
Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou -
2019 Poster: Wasserstein Dependency Measure for Representation Learning »
Sherjil Ozair · Corey Lynch · Yoshua Bengio · Aaron van den Oord · Sergey Levine · Pierre Sermanet -
2019 Poster: Evaluating Protein Transfer Learning with TAPE »
Roshan Rao · Nicholas Bhattacharya · Neil Thomas · Yan Duan · Peter Chen · John Canny · Pieter Abbeel · Yun Song -
2019 Spotlight: Evaluating Protein Transfer Learning with TAPE »
Roshan Rao · Nicholas Bhattacharya · Neil Thomas · Yan Duan · Peter Chen · John Canny · Pieter Abbeel · Yun Song -
2019 Poster: Planning with Goal-Conditioned Policies »
Soroush Nasiriany · Vitchyr Pong · Steven Lin · Sergey Levine -
2019 Poster: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning »
Benjamin Eysenbach · Russ Salakhutdinov · Sergey Levine -
2019 Poster: Goal-conditioned Imitation Learning »
Yiming Ding · Carlos Florensa · Pieter Abbeel · Mariano Phielipp -
2019 Poster: Geometry-Aware Neural Rendering »
Joshua Tobin · Wojciech Zaremba · Pieter Abbeel -
2019 Poster: MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies »
Xue Bin Peng · Michael Chang · Grace Zhang · Pieter Abbeel · Sergey Levine -
2019 Poster: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction »
Aviral Kumar · Justin Fu · George Tucker · Sergey Levine -
2019 Poster: Unsupervised Curricula for Visual Meta-Reinforcement Learning »
Allan Jabri · Kyle Hsu · Abhishek Gupta · Benjamin Eysenbach · Sergey Levine · Chelsea Finn -
2019 Oral: Geometry-Aware Neural Rendering »
Joshua Tobin · Wojciech Zaremba · Pieter Abbeel -
2019 Poster: Compositional Plan Vectors »
Coline Devin · Daniel Geng · Pieter Abbeel · Trevor Darrell · Sergey Levine -
2019 Spotlight: Unsupervised Curricula for Visual Meta-Reinforcement Learning »
Allan Jabri · Kyle Hsu · Abhishek Gupta · Benjamin Eysenbach · Sergey Levine · Chelsea Finn -
2019 Poster: Causal Confusion in Imitation Learning »
Pim de Haan · Dinesh Jayaraman · Sergey Levine -
2019 Poster: Meta-Learning with Implicit Gradients »
Aravind Rajeswaran · Chelsea Finn · Sham Kakade · Sergey Levine -
2019 Poster: On the Utility of Learning about Humans for Human-AI Coordination »
Micah Carroll · Rohin Shah · Mark Ho · Tom Griffiths · Sanjit Seshia · Pieter Abbeel · Anca Dragan -
2019 Poster: When to Trust Your Model: Model-Based Policy Optimization »
Michael Janner · Justin Fu · Marvin Zhang · Sergey Levine -
2019 Poster: Compression with Flows via Local Bits-Back Coding »
Jonathan Ho · Evan Lohn · Pieter Abbeel -
2019 Poster: Guided Meta-Policy Search »
Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn -
2019 Spotlight: Compression with Flows via Local Bits-Back Coding »
Jonathan Ho · Evan Lohn · Pieter Abbeel -
2019 Spotlight: Guided Meta-Policy Search »
Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn -
2019 Oral: Causal Confusion in Imitation Learning »
Pim de Haan · Dinesh Jayaraman · Sergey Levine -
2018 : Meta-Learning to Follow Instructions, Examples, and Demonstrations »
Sergey Levine -
2018 : Pieter Abbeel »
Pieter Abbeel -
2018 : TBA 2 »
Sergey Levine -
2018 : Control as Inference and Soft Deep RL (Sergey Levine) »
Sergey Levine -
2018 : TBC 9 »
Sergey Levine -
2018 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · David Silver · Satinder Singh · Joelle Pineau · Joshua Achiam · Rein Houthooft · Aravind Srinivas -
2018 Poster: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models »
Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine -
2018 Spotlight: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models »
Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine -
2018 Poster: Probabilistic Model-Agnostic Meta-Learning »
Chelsea Finn · Kelvin Xu · Sergey Levine -
2018 Poster: Learning Plannable Representations with Causal InfoGAN »
Thanard Kurutach · Aviv Tamar · Ge Yang · Stuart Russell · Pieter Abbeel -
2018 Poster: Visual Reinforcement Learning with Imagined Goals »
Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine -
2018 Spotlight: Visual Reinforcement Learning with Imagined Goals »
Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine -
2018 Poster: Visual Memory for Robust Path Following »
Ashish Kumar · Saurabh Gupta · David Fouhey · Sergey Levine · Jitendra Malik -
2018 Poster: Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition »
Justin Fu · Avi Singh · Dibya Ghosh · Larry Yang · Sergey Levine -
2018 Oral: Visual Memory for Robust Path Following »
Ashish Kumar · Saurabh Gupta · David Fouhey · Sergey Levine · Jitendra Malik -
2018 Poster: Data-Efficient Hierarchical Reinforcement Learning »
Ofir Nachum · Shixiang (Shane) Gu · Honglak Lee · Sergey Levine -
2018 Poster: Evolved Policy Gradients »
Rein Houthooft · Yuhua Chen · Phillip Isola · Bradly Stadie · Filip Wolski · Jonathan Ho · Pieter Abbeel -
2018 Spotlight: Evolved Policy Gradients »
Rein Houthooft · Yuhua Chen · Phillip Isola · Bradly Stadie · Filip Wolski · Jonathan Ho · Pieter Abbeel -
2018 Poster: Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior »
Sid Reddy · Anca Dragan · Sergey Levine -
2018 Poster: The Importance of Sampling in Meta-Reinforcement Learning »
Bradly Stadie · Ge Yang · Rein Houthooft · Peter Chen · Yan Duan · Yuhuai Wu · Pieter Abbeel · Ilya Sutskever -
2017 : Meta-Learning Shared Hierarchies (Pieter Abbeel) »
Pieter Abbeel -
2017 : Exhausting the Sim with Domain Randomization and Trying to Exhaust the Real World, Pieter Abbeel, UC Berkeley and Embodied Intelligence »
Pieter Abbeel · Gregory Kahn -
2017 Workshop: Workshop on Meta-Learning »
Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine -
2017 Symposium: Deep Reinforcement Learning »
Pieter Abbeel · Yan Duan · David Silver · Satinder Singh · Junhyuk Oh · Rein Houthooft -
2017 Poster: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning »
Justin Fu · John Co-Reyes · Sergey Levine -
2017 Poster: #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning »
Haoran Tang · Rein Houthooft · Davis Foote · Adam Stooke · Xi Chen · Yan Duan · John Schulman · Filip De Turck · Pieter Abbeel -
2017 Poster: Inverse Reward Design »
Dylan Hadfield-Menell · Smitha Milli · Pieter Abbeel · Stuart J Russell · Anca Dragan -
2017 Spotlight: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning »
Justin Fu · John Co-Reyes · Sergey Levine -
2017 Oral: Inverse Reward Design »
Dylan Hadfield-Menell · Smitha Milli · Pieter Abbeel · Stuart J Russell · Anca Dragan -
2017 Invited Talk: Deep Learning for Robotics »
Pieter Abbeel -
2017 Demonstration: Deep Robotic Learning using Visual Imagination and Meta-Learning »
Chelsea Finn · Frederik Ebert · Tianhe Yu · Annie Xie · Sudeep Dasari · Pieter Abbeel · Sergey Levine -
2017 Poster: One-Shot Imitation Learning »
Yan Duan · Marcin Andrychowicz · Bradly Stadie · Jonathan Ho · Jonas Schneider · Ilya Sutskever · Pieter Abbeel · Wojciech Zaremba -
2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine -
2016 Workshop: Deep Learning for Action and Interaction »
Chelsea Finn · Raia Hadsell · David Held · Sergey Levine · Percy Liang -
2016 : Pieter Abbeel (University of California, Berkeley) »
Pieter Abbeel -
2016 : Sergey Levine (University of California, Berkeley) »
Sergey Levine -
2016 : Invited Talk: Safe Reinforcement Learning for Robotics (Pieter Abbeel, UC Berkeley and OpenAI) »
Pieter Abbeel -
2016 Workshop: Deep Reinforcement Learning »
David Silver · Satinder Singh · Pieter Abbeel · Peter Chen -
2016 Poster: Backprop KF: Learning Discriminative Deterministic State Estimators »
Tuomas Haarnoja · Anurag Ajay · Sergey Levine · Pieter Abbeel -
2016 Poster: Learning to Poke by Poking: Experiential Learning of Intuitive Physics »
Pulkit Agrawal · Ashvin Nair · Pieter Abbeel · Jitendra Malik · Sergey Levine -
2016 Oral: Learning to Poke by Poking: Experiential Learning of Intuitive Physics »
Pulkit Agrawal · Ashvin Nair · Pieter Abbeel · Jitendra Malik · Sergey Levine -
2016 Poster: Combinatorial Energy Learning for Image Segmentation »
Jeremy Maitin-Shepard · Viren Jain · Michal Januszewski · Peter Li · Pieter Abbeel -
2016 Poster: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets »
Xi Chen · Peter Chen · Yan Duan · Rein Houthooft · John Schulman · Ilya Sutskever · Pieter Abbeel -
2016 Poster: VIME: Variational Information Maximizing Exploration »
Rein Houthooft · Xi Chen · Peter Chen · Yan Duan · John Schulman · Filip De Turck · Pieter Abbeel -
2016 Poster: Value Iteration Networks »
Aviv Tamar · Sergey Levine · Pieter Abbeel · YI WU · Garrett Thomas -
2016 Oral: Value Iteration Networks »
Aviv Tamar · Sergey Levine · Pieter Abbeel · YI WU · Garrett Thomas -
2016 Poster: Cooperative Inverse Reinforcement Learning »
Dylan Hadfield-Menell · Stuart J Russell · Pieter Abbeel · Anca Dragan -
2016 Tutorial: Deep Reinforcement Learning Through Policy Optimization »
Pieter Abbeel · John Schulman -
2015 : Deep Robotic Learning »
Sergey Levine -
2015 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · John Schulman · Satinder Singh · David Silver -
2015 Poster: Gradient Estimation Using Stochastic Computation Graphs »
John Schulman · Nicolas Heess · Theophane Weber · Pieter Abbeel -
2014 Workshop: Novel Trends and Applications in Reinforcement Learning »
Csaba Szepesvari · Marc Deisenroth · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez -
2014 Poster: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics »
Sergey Levine · Pieter Abbeel -
2014 Spotlight: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics »
Sergey Levine · Pieter Abbeel -
2013 Poster: Variational Policy Search via Trajectory Optimization »
Sergey Levine · Vladlen Koltun -
2012 Poster: Near Optimal Chernoff Bounds for Markov Decision Processes »
Teodor Mihai Moldovan · Pieter Abbeel -
2012 Spotlight: Near Optimal Chernoff Bounds for Markov Decision Processes »
Teodor Mihai Moldovan · Pieter Abbeel -
2010 Spotlight: On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient »
Jie Tang · Pieter Abbeel -
2010 Poster: Feature Construction for Inverse Reinforcement Learning »
Sergey Levine · Zoran Popovic · Vladlen Koltun -
2010 Poster: On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient »
Jie Tang · Pieter Abbeel -
2007 Spotlight: Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion »
J. Zico Kolter · Pieter Abbeel · Andrew Y Ng -
2007 Poster: Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion »
J. Zico Kolter · Pieter Abbeel · Andrew Y Ng -
2006 Poster: Max-margin classification of incomplete data »
Gal Chechik · Geremy Heitz · Gal Elidan · Pieter Abbeel · Daphne Koller -
2006 Spotlight: Max-margin classification of incomplete data »
Gal Chechik · Geremy Heitz · Gal Elidan · Pieter Abbeel · Daphne Koller -
2006 Poster: An Application of Reinforcement Learning to Aerobatic Helicopter Flight »
Pieter Abbeel · Adam P Coates · Andrew Y Ng · Morgan Quigley -
2006 Talk: An Application of Reinforcement Learning to Aerobatic Helicopter Flight »
Pieter Abbeel · Adam P Coates · Andrew Y Ng · Morgan Quigley