This workshop builds connections between different areas of RL centered on understanding algorithms and their context. We are interested in questions such as (but not limited to): (i) How can we gauge the complexity of an RL problem? (ii) Which classes of algorithms can tackle which classes of problems? (iii) How can we develop practically applicable guidelines for formulating RL tasks that are tractable to solve? We welcome submissions that address these and related questions through an ecological and data-centric view, pushing forward the limits of our understanding of the RL problem.
Tue 5:00 a.m. - 5:10 a.m. | Introductory Remarks (Intro)
Tue 5:10 a.m. - 5:30 a.m. | Artificial what? (Invited Talk) | Shane Legg
Tue 5:30 a.m. - 5:40 a.m. | Shane Legg (Live Q&A)
Tue 5:40 a.m. - 6:00 a.m. | What makes for an interesting RL problem? (Invited Talk) | Joelle Pineau
Tue 6:00 a.m. - 6:10 a.m. | Joelle Pineau (Live Q&A)
Tue 6:10 a.m. - 6:25 a.m. | HyperDQN: A Randomized Exploration Method for Deep Reinforcement Learning (Oral)
Randomized least-squares value iteration (RLSVI) is a provably efficient exploration method. However, it is limited to the case where (1) a good feature is known in advance and (2) this feature is fixed during training: otherwise, RLSVI suffers an unbearable computational burden to obtain posterior samples of the parameter of the $Q$-value function. In this work, we present a practical algorithm named HyperDQN that addresses these two issues in the context of deep reinforcement learning, where the feature changes over iterations. HyperDQN is built on two parametric models: in addition to a non-linear neural network (the base model) that predicts $Q$-values, our method employs a probabilistic hypermodel (the meta model) that outputs the parameters of the base model. When both models are jointly optimized under a specifically designed objective, three purposes are achieved. First, the hypermodel can generate approximate posterior samples of the parameter of the $Q$-value function. As a result, diverse $Q$-value functions are sampled to select exploratory action sequences, retaining the key property of RLSVI for efficient exploration. Second, a good feature is learned to approximate $Q$-value functions, addressing limitation (1). Third, posterior samples of the $Q$-value function can be obtained more efficiently than with existing methods, and the changing feature does not affect this efficiency, dealing with limitation (2). On the Atari 2600 suite, after $20$M samples, HyperDQN achieves about $2 \times$ improvements over (double) DQN, the advanced method Bootstrapped DQN, and the SOTA exploration-bonus method OB2I. On the challenging SuperMarioBros task, HyperDQN outperforms the baselines on $7$ out of $9$ games.
Ziniu Li · Yingru Li · Yushun Zhang · Tong Zhang · Zhiquan Luo
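To make the base-model/meta-model split concrete, here is a minimal sketch of the hypermodel idea in PyTorch. Everything here (class names, dimensions, the linear hypermodel) is an illustrative assumption rather than the authors' implementation, which additionally uses a specifically designed joint objective, target networks, and per-episode noise resampling.

```python
import torch
import torch.nn as nn

class HyperQNetwork(nn.Module):
    """Sketch of a hypermodel for randomized value functions: a noise
    vector z is mapped to the weights of a linear Q-head, so each draw
    z ~ N(0, I) yields one sampled Q-function (one 'posterior sample')."""

    def __init__(self, state_dim, num_actions, feature_dim=64, z_dim=8):
        super().__init__()
        self.feature_dim, self.num_actions = feature_dim, num_actions
        # Base model: the learned feature (limitation (1) in the abstract).
        self.features = nn.Sequential(nn.Linear(state_dim, feature_dim), nn.ReLU())
        # Meta model: maps noise z to the parameters of the Q-head.
        self.hyper = nn.Linear(z_dim, feature_dim * num_actions + num_actions)

    def forward(self, state, z):
        phi = self.features(state)                           # (B, F)
        theta = self.hyper(z)                                # (B, F*A + A)
        W = theta[:, : self.feature_dim * self.num_actions]
        W = W.view(-1, self.num_actions, self.feature_dim)   # (B, A, F)
        b = theta[:, self.feature_dim * self.num_actions :]  # (B, A)
        return torch.einsum("bf,baf->ba", phi, W) + b        # Q(s, ·)

net = HyperQNetwork(state_dim=4, num_actions=2)
z = torch.randn(1, 8)                    # fix one noise sample per episode
q_values = net(torch.randn(1, 4), z)
action = q_values.argmax(dim=-1)         # act greedily w.r.t. the sampled Q
```

Acting greedily with respect to a freshly sampled Q-function each episode is what carries over the RLSVI-style deep exploration.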
Tue 6:25 a.m. - 6:40 a.m. | Grounding an Ecological Theory of Artificial Intelligence in Human Evolution (Oral)
Recent advances in Artificial Intelligence (AI) have revived the quest for agents able to acquire an open-ended repertoire of skills. Although this ability is fundamentally related to the characteristics of human intelligence, research in this field rarely considers the processes and ecological conditions that may have guided the emergence of complex cognitive capacities during the evolution of the species. Research in Human Behavioral Ecology (HBE) seeks to understand how the behaviors characterizing human nature can be conceived as adaptive responses to major changes in our ecological niche. In this paper, we propose a framework highlighting the role of environmental complexity in open-ended skill acquisition, grounded in major hypotheses from HBE and recent contributions in Reinforcement Learning (RL). We use this framework to highlight fundamental links between the two disciplines, as well as to identify feedback loops that bootstrap ecological complexity and create promising research directions for AI researchers. We also present our first steps towards designing a simulation environment that implements the climate dynamics necessary for studying key HBE hypotheses relating environmental complexity to skill acquisition.
Eleni Nisioti · Clément Moulin-Frier
Tue 6:40 a.m. - 6:50 a.m. | Virtual Coffee Break | Come and join us in the virtual lounge on GatherTown for a short break.
Tue 6:50 a.m. - 7:10 a.m. | Sculpting (human-like) AI systems by sculpting their (social) environments (Invited Talk) | Pierre-Yves Oudeyer
Tue 7:10 a.m. - 7:20 a.m. | Pierre-Yves Oudeyer (Live Q&A)
Tue 7:20 a.m. - 7:40 a.m. | Towards RL applications in video games and with human users (Invited Talk) | Katja Hofmann
Tue 7:40 a.m. - 7:50 a.m. | Katja Hofmann (Live Q&A)
Tue 7:50 a.m. - 8:05 a.m. | Habitat 2.0: Training Home Assistants to Rearrange their Habitat (Oral)
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack: data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g., cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850× real-time) on an 8-GPU node, representing 100× speed-ups over prior work; and (iii) Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidy the house, prepare groceries, set the table) that test a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare deep reinforcement learning (RL) at scale and classical sense-plan-act (SPA) pipelines in long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from 'hand-off problems'; and (3) SPA pipelines are more brittle than RL policies.
Andrew Szot · Alexander Clegg · Eric Undersander · Erik Wijmans · Yili Zhao · Noah Maestre · Mustafa Mukadam · Oleksandr Maksymets · Aaron Gokaslan · Sameer Dharur · Franziska Meier · Wojciech Galuba · Angel Chang · Zsolt Kira · Vladlen Koltun · Jitendra Malik · Manolis Savva · Dhruv Batra
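The 'hand-off problem' in finding (2) can be illustrated with a toy two-skill pipeline: each skill succeeds in isolation, but the first skill's terminal states fall outside the second skill's training distribution. The numbers below are invented for illustration and have nothing to do with Habitat's actual skills.

```python
import random

def skill_navigate(x):
    """Trained to reach the shelf at x = 5; terminates anywhere within +/-1."""
    return 5.0 + random.uniform(-1.0, 1.0)

def skill_pick(x):
    """Trained only from start states very near x = 5; brittle elsewhere."""
    return abs(x - 5.0) < 0.3

# The navigation skill's terminal-state distribution is wider than the
# picking skill's start-state distribution, so the composed pipeline fails
# on most hand-offs even though each skill "works" on its own.
trials = 10_000
successes = sum(skill_pick(skill_navigate(0.0)) for _ in range(trials))
print(f"composed success rate: {successes / trials:.2f}")  # about 0.30
```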
Tue 8:05 a.m. - 8:20 a.m. | Embodied Intelligence via Learning and Evolution (Contributed Talk) | Agrim Gupta
Tue 8:20 a.m. - 8:40 a.m. | A Methodology for RL Environment Research (Invited Talk) | Daniel Tanis
Tue 8:40 a.m. - 8:50 a.m. | Daniel Tanis (Live Q&A)
Tue 8:50 a.m. - 9:00 a.m. | Virtual Coffee Break
Tue 9:00 a.m. - 10:00 a.m. | Virtual Poster Session (Poster Session)
Tue 10:00 a.m. - 10:20 a.m. | Environment Capacity (Invited Talk) | Benjamin Van Roy
Tue 10:20 a.m. - 10:30 a.m. | Benjamin Van Roy (Live Q&A)
Tue 10:30 a.m. - 10:50 a.m. | A Universal Framework for Reinforcement Learning (Invited Talk) | Warren Powell
Tue 10:50 a.m. - 11:00 a.m. | Warren Powell (Live Q&A)
Tue 11:00 a.m. - 11:15 a.m. | Representation Learning for Online and Offline RL in Low-rank MDPs (Oral)
This work studies representation learning in RL: how can we learn a compact, low-dimensional representation on top of which we can perform RL procedures such as exploration and exploitation in a sample-efficient manner? We focus on low-rank Markov Decision Processes (MDPs), where the transition dynamics correspond to a low-rank transition matrix. Unlike prior works that assume the representation is known (e.g., linear MDPs), here we need to learn the representation of the low-rank MDP. We study both the online and offline RL settings. For the online setting, operating with the same computational oracles used in FLAMBE (Agarwal et al.), the state-of-the-art algorithm for learning representations in low-rank MDPs, we propose an algorithm REP-UCB (Upper Confidence Bound driven Representation learning for RL), which significantly improves the sample complexity from $\widetilde{O}( A^9 d^7 / (\epsilon^{10} (1-\gamma)^{22}))$ for FLAMBE to $\widetilde{O}( A^4 d^4 / (\epsilon^2 (1-\gamma)^{2}) )$, with $d$ being the rank of the transition matrix (or dimension of the ground-truth representation), $A$ the number of actions, and $\gamma$ the discount factor. Notably, REP-UCB is simpler than FLAMBE, as it directly balances the interplay between representation learning, exploration, and exploitation, while FLAMBE is an explore-then-commit style approach that has to perform reward-free exploration step-by-step forward in time. For the offline RL setting, we develop an algorithm that leverages pessimism to learn under a partial coverage condition: our algorithm is able to compete against any policy as long as it is covered by the offline distribution.
Masatoshi Uehara · Xuezhou Zhang · Wen Sun
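For context, the low-rank MDP setting in this abstract is standardly defined by a bilinear factorization of the transition kernel; this is the usual notation from the low-rank MDP literature, not anything specific to this paper:

```latex
% Low-rank MDP: the transition kernel factors through rank-d feature maps.
% Linear MDPs assume \phi is known; here both \phi and \mu must be learned.
P(s' \mid s, a) = \langle \phi(s, a), \mu(s') \rangle
                = \sum_{i=1}^{d} \phi_i(s, a)\, \mu_i(s'),
\qquad \phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^{d}.
```

The $d$ in the sample-complexity bounds above is exactly this rank, and the representation-learning problem is to recover $\phi$ from data rather than assume it.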
Tue 11:15 a.m. - 11:30 a.m. | Understanding the Effects of Dataset Composition on Offline Reinforcement Learning (Oral)
The promise of Offline Reinforcement Learning (RL) lies in learning policies from fixed datasets, without interacting with the environment. Being unable to interact makes the dataset the most essential ingredient of the algorithm, as it directly shapes the learned policies. Yet systematic studies of how dataset composition influences Offline RL algorithms have been missing. Towards that end, we conducted a comprehensive empirical analysis of the effect of dataset composition on the performance of Offline RL algorithms in discrete-action environments. Performance is studied through two dataset metrics: Trajectory Quality (TQ) and State-Action Coverage (SACo). Our analysis suggests that variants of the off-policy Deep Q-Network family rely on the dataset to exhibit high SACo. In contrast, algorithms that constrain the learned policy towards the data-generating policy perform well across datasets that exhibit high TQ, high SACo, or both. For datasets with high TQ, Behavior Cloning outperforms or performs similarly to the best Offline RL algorithms.
Kajetan Schweighofer · Markus Hofmarcher · Marius-Constantin Dinu · Angela Bitto · Philipp Renz · Vihang Patil · Sepp Hochreiter
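To make the two dataset metrics concrete, here is a toy sketch in Python. The exact definitions and normalizations in the paper may differ; the formulas below are assumptions for illustration only.

```python
import numpy as np

def dataset_metrics(trajectories, expert_return, random_return):
    """Toy proxies (assumed definitions, not the paper's): Trajectory
    Quality (TQ) as the dataset's expert-normalized mean return, and
    State-Action Coverage (SACo) as the fraction of unique
    (state, action) pairs among all transitions."""
    returns, unique_pairs, total = [], set(), 0
    for traj in trajectories:        # traj: list of (state, action, reward)
        returns.append(sum(r for _, _, r in traj))
        for s, a, _ in traj:
            unique_pairs.add((tuple(np.ravel(s)), a))
            total += 1
    tq = (np.mean(returns) - random_return) / (expert_return - random_return)
    saco = len(unique_pairs) / total
    return tq, saco

# Two short trajectories in a toy discrete environment.
data = [
    [((0,), 1, 0.0), ((1,), 0, 1.0)],  # reaches the goal: return 1.0
    [((0,), 1, 0.0), ((1,), 1, 0.0)],  # wanders: return 0.0
]
print(dataset_metrics(data, expert_return=1.0, random_return=0.0))
# -> (0.5, 0.75): middling quality; 3 of 4 (state, action) pairs are unique
```

Under this reading, a dataset from a single expert would score high TQ but low SACo, while uniform-random behavior gives the opposite, which is the axis along which the paper separates algorithm families.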
Tue 11:30 a.m. - 11:50 a.m. | Structural Assumptions for Better Generalization in Reinforcement Learning (Invited Talk) | Amy Zhang
Tue 11:50 a.m. - 12:00 p.m. | Amy Zhang (Live Q&A)
Tue 12:00 p.m. - 12:10 p.m. | Virtual Coffee Break
Tue 12:10 p.m. - 12:30 p.m. | Reinforcement learning: It's all in the mind (Invited Talk) | Tom Griffiths
Tue 12:30 p.m. - 12:40 p.m. | Tom Griffiths (Live Q&A)
Tue 12:40 p.m. - 1:00 p.m. | Curriculum-based Learning: An Effective Approach for Acquiring Dynamic Skills (Invited Talk) | Michiel van de Panne
Tue 1:00 p.m. - 1:10 p.m. | Michiel van de Panne (Live Q&A)
Tue 1:10 p.m. - 2:00 p.m. | Live Panel Discussion (Discussion Panel)
Tue 2:00 p.m. - 2:15 p.m. | BIG-Gym: A Crowd-Sourcing Challenge for RL Environments and Behaviors (Launch)
Tue 2:15 p.m. - 2:20 p.m. | Closing Remarks (Remarks)
Author Information
Manfred Díaz (Mila, Quebec)
Hiroki Furuta (The University of Tokyo)
Elise van der Pol (University of Amsterdam)
Lisa Lee (Google Brain)
Shixiang (Shane) Gu (Google Brain)
Pablo Samuel Castro (Google)
Simon Du (University of Washington)
Marc Bellemare (Google Brain)
Sergey Levine (UC Berkeley)
More from the Same Authors
- 2020: GANterpretations | Pablo Samuel Castro
- 2021 Spotlight: Robust Predictable Control | Ben Eysenbach · Russ Salakhutdinov · Sergey Levine
- 2021 Spotlight: Offline Reinforcement Learning as One Big Sequence Modeling Problem | Michael Janner · Qiyang Li · Sergey Levine
- 2021 Spotlight: Pragmatic Image Compression for Human-in-the-Loop Decision-Making | Sid Reddy · Anca Dragan · Sergey Levine
- 2021: Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets | Frederik Ebert · Yanlai Yang · Karl Schmeckpeper · Bernadette Bucher · Kostas Daniilidis · Chelsea Finn · Sergey Levine
- 2021: Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments | Dhruv Shah · Daniel Shin · Nick Rhinehart · Ali Agha · David D Fan · Sergey Levine
- 2021: Lifting the veil on hyper-parameters for value-based deep reinforcement learning | João Madeira Araújo · Johan Obando Ceron · Pablo Samuel Castro
- 2021: Extending the WILDS Benchmark for Unsupervised Adaptation | Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Ian Stavness · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
- 2021: Test Time Robustification of Deep Models via Adaptation and Augmentation | Marvin Zhang · Sergey Levine · Chelsea Finn
- 2021: Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning | Dhruv Shah · Ted Xiao · Alexander Toshev · Sergey Levine · brian ichter
- 2021: Data Sharing without Rewards in Multi-Task Offline Reinforcement Learning | Tianhe Yu · Aviral Kumar · Yevgen Chebotar · Chelsea Finn · Sergey Levine · Karol Hausman
- 2021: Should I Run Offline Reinforcement Learning or Behavioral Cloning? | Aviral Kumar · Joey Hong · Anikait Singh · Sergey Levine
- 2021: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization | Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine
- 2021: Distributional Decision Transformer for Offline Hindsight Information Matching | Hiroki Furuta · Yutaka Matsuo · Shixiang (Shane) Gu
- 2021: Offline Reinforcement Learning with In-sample Q-Learning | Ilya Kostrikov · Ashvin Nair · Sergey Levine
- 2021: C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks | Tianjun Zhang · Ben Eysenbach · Russ Salakhutdinov · Sergey Levine · Joseph Gonzalez
- 2021: The Information Geometry of Unsupervised Reinforcement Learning | Ben Eysenbach · Russ Salakhutdinov · Sergey Levine
- 2021: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL | Ben Eysenbach · Alexander Khazatsky · Sergey Levine · Russ Salakhutdinov
- 2021: Offline Meta-Reinforcement Learning with Online Self-Supervision | Vitchyr Pong · Ashvin Nair · Laura Smith · Catherine Huang · Sergey Levine
- 2021: Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments | Daniel Shin · Dhruv Shah · Ali Agha · Nicholas Rhinehart · Sergey Levine
- 2021: CoMPS: Continual Meta Policy Search | Glen Berseth · Zhiwei Zhang · Grace Zhang · Chelsea Finn · Sergey Levine
- 2021: Offline Reinforcement Learning with Implicit Q-Learning | Ilya Kostrikov · Ashvin Nair · Sergey Levine
- 2021: TRAIL: Near-Optimal Imitation Learning with Suboptimal Data | Mengjiao (Sherry) Yang · Sergey Levine · Ofir Nachum
- 2021: Why so pessimistic? Estimating uncertainties for offline RL through ensembles, and why their independence matters | Kamyar Ghasemipour · Shixiang (Shane) Gu · Ofir Nachum
- 2022 Poster: Provable General Function Class Representation Learning in Multitask Bandits and MDP | Rui Lu · Andrew Zhao · Simon Du · Gao Huang
- 2022: A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces | Charline Le Lan · Joshua Greaves · Jesse Farebrother · Mark Rowland · Fabian Pedregosa · Rishabh Agarwal · Marc Bellemare
- 2022: Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization | Runlong Zhou · Yuandong Tian · YI WU · Simon Du
- 2022: You Only Live Once: Single-Life Reinforcement Learning | Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn
- 2022: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement | Michael Chang · Alyssa L Dayan · Franziska Meier · Tom Griffiths · Sergey Levine · Amy Zhang
- 2022: Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks | Jesse Farebrother · Joshua Greaves · Rishabh Agarwal · Charline Le Lan · Ross Goroshin · Pablo Samuel Castro · Marc Bellemare
- 2022: Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes | Aviral Kumar · Rishabh Agarwal · XINYANG GENG · George Tucker · Sergey Levine
- 2022: Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning | Aviral Kumar · Anikait Singh · Frederik Ebert · Yanlai Yang · Chelsea Finn · Sergey Levine
- 2022: Offline Reinforcement Learning from Heteroskedastic Data Via Support Constraints | Anikait Singh · Aviral Kumar · Quan Vuong · Yevgen Chebotar · Sergey Levine
- 2022: Skill Acquisition by Instruction Augmentation on Offline Datasets | Ted Xiao · Harris Chan · Pierre Sermanet · Ayzaan Wahid · Anthony Brohan · Karol Hausman · Sergey Levine · Jonathan Tompson
- 2022: Control Graph as Unified IO for Morphology-Task Generalization | Hiroki Furuta · Yusuke Iwasawa · Yutaka Matsuo · Shixiang (Shane) Gu
- 2022: PnP-Nav: Plug-and-Play Policies for Generalizable Visual Navigation Across Robots | Dhruv Shah · Ajay Sridhar · Arjun Bhorkar · Noriaki Hirose · Sergey Levine
- 2022: Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts | Amrith Setlur · Don Dennis · Benjamin Eysenbach · Aditi Raghunathan · Chelsea Finn · Virginia Smith · Sergey Levine
- 2022: Confidence-Conditioned Value Functions for Offline Reinforcement Learning | Joey Hong · Aviral Kumar · Sergey Levine
- 2022: Efficient Deep Reinforcement Learning Requires Regulating Statistical Overfitting | Qiyang Li · Aviral Kumar · Ilya Kostrikov · Sergey Levine
- 2022: Contrastive Example-Based Control | Kyle Hatch · Sarthak J Shetty · Benjamin Eysenbach · Tianhe Yu · Rafael Rafailov · Russ Salakhutdinov · Sergey Levine · Chelsea Finn
- 2022: Offline Reinforcement Learning for Customizable Visual Navigation | Dhruv Shah · Arjun Bhorkar · Hrishit Leen · Ilya Kostrikov · Nicholas Rhinehart · Sergey Levine
- 2022: A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning | Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Russ Salakhutdinov
- 2022: Variance Double-Down: The Small Batch Size Anomaly in Multistep Deep Reinforcement Learning | Johan Obando Ceron · Marc Bellemare · Pablo Samuel Castro
- 2022: Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier | Pierluca D'Oro · Max Schwarzer · Evgenii Nikishin · Pierre-Luc Bacon · Marc Bellemare · Aaron Courville
- 2022: Adversarial Policies Beat Professional-Level Go AIs | Tony Wang · Adam Gleave · Nora Belrose · Tom Tseng · Michael Dennis · Yawen Duan · Viktor Pogrebniak · Joseph Miller · Sergey Levine · Stuart J Russell
- 2022: Investigating Multi-task Pretraining and Generalization in Reinforcement Learning | Adrien Ali Taiga · Rishabh Agarwal · Jesse Farebrother · Aaron Courville · Marc Bellemare
- 2022: Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective | Raj Ghugare · Homanga Bharadhwaj · Benjamin Eysenbach · Sergey Levine · Ruslan Salakhutdinov
- 2022 Spotlight: Lightning Talks 4A-4 | Yunhao Tang · LING LIANG · Thomas Chau · Daeha Kim · Junbiao Cui · Rui Lu · Lei Song · Byung Cheol Song · Andrew Zhao · Remi Munos · Łukasz Dudziak · Jiye Liang · Ke Xue · Kaidi Xu · Mark Rowland · Hongkai Wen · Xing Hu · Xiaobin Huang · Simon Du · Nicholas Lane · Chao Qian · Lei Deng · Bernardo Avila Pires · Gao Huang · Will Dabney · Mohamed Abdelfattah · Yuan Xie · Marc Bellemare
- 2022 Spotlight: Provable General Function Class Representation Learning in Multitask Bandits and MDP | Rui Lu · Andrew Zhao · Simon Du · Gao Huang
- 2022 Spotlight: The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning | Yunhao Tang · Remi Munos · Mark Rowland · Bernardo Avila Pires · Will Dabney · Marc Bellemare
- 2022: Panel RL Benchmarks | Minmin Chen · Pablo Samuel Castro · Caglar Gulcehre · Tony Jebara · Peter Stone
- 2022 Workshop: Broadening Research Collaborations | Sara Hooker · Rosanne Liu · Pablo Samuel Castro · FatemehSadat Mireshghallah · Sunipa Dev · Benjamin Rosman · João Madeira Araújo · Savannah Thais · Sunny Sanyal · Tejumade Afonja · Swapneel Mehta · Tyler Zhu
- 2022 Poster: MEMO: Test Time Robustness via Adaptation and Augmentation | Marvin Zhang · Sergey Levine · Chelsea Finn
- 2022 Poster: When are Offline Two-Player Zero-Sum Markov Games Solvable? | Qiwen Cui · Simon Du
- 2022 Poster: First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization | Siddharth Reddy · Sergey Levine · Anca Dragan
- 2022 Poster: Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress | Rishabh Agarwal · Max Schwarzer · Pablo Samuel Castro · Aaron Courville · Marc Bellemare
- 2022 Poster: The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning | Yunhao Tang · Remi Munos · Mark Rowland · Bernardo Avila Pires · Will Dabney · Marc Bellemare
- 2022 Poster: DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning | Quan Vuong · Aviral Kumar · Sergey Levine · Yevgen Chebotar
- 2022 Poster: Learning in Congestion Games with Bandit Feedback | Qiwen Cui · Zhihan Xiong · Maryam Fazel · Simon Du
- 2022 Poster: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions | Amrith Setlur · Benjamin Eysenbach · Virginia Smith · Sergey Levine
- 2022 Poster: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL | Benjamin Eysenbach · Alexander Khazatsky · Sergey Levine · Russ Salakhutdinov
- 2022 Poster: Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity | Abhishek Gupta · Aldo Pacchiano · Yuexiang Zhai · Sham Kakade · Sergey Levine
- 2022 Poster: Distributionally Adaptive Meta Reinforcement Learning | Anurag Ajay · Abhishek Gupta · Dibya Ghosh · Sergey Levine · Pulkit Agrawal
- 2022 Poster: You Only Live Once: Single-Life Reinforcement Learning | Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn
- 2022 Poster: Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation | Michael Chang · Tom Griffiths · Sergey Levine
- 2022 Poster: Data-Driven Offline Decision-Making via Invariant Representation Learning | Han Qi · Yi Su · Aviral Kumar · Sergey Levine
- 2022 Poster: Provably Efficient Offline Multi-agent Reinforcement Learning via Strategy-wise Bonus | Qiwen Cui · Simon Du
- 2022 Poster: On Gap-dependent Bounds for Offline Reinforcement Learning | Xinqi Wang · Qiwen Cui · Simon Du
- 2022 Poster: Contrastive Learning as Goal-Conditioned Reinforcement Learning | Benjamin Eysenbach · Tianjun Zhang · Sergey Levine · Russ Salakhutdinov
- 2022 Poster: Near-Optimal Randomized Exploration for Tabular Markov Decision Processes | Zhihan Xiong · Ruoqi Shen · Qiwen Cui · Maryam Fazel · Simon Du
- 2022 Poster: Imitating Past Successes can be Very Suboptimal | Benjamin Eysenbach · Soumith Udatha · Russ Salakhutdinov · Sergey Levine
- 2021: Retrospective Panel | Sergey Levine · Nando de Freitas · Emma Brunskill · Finale Doshi-Velez · Nan Jiang · Rishabh Agarwal
- 2021: Invited Talk: Pablo Castro (Google Brain) on Estimating Policy Functions in Payment Systems using Reinforcement Learning | Pablo Samuel Castro
- 2021: Data-Driven Offline Optimization for Architecting Hardware Accelerators | Aviral Kumar · Amir Yazdanbakhsh · Milad Hashemi · Kevin Swersky · Sergey Levine
- 2021: Sergey Levine Talk Q&A | Sergey Levine
- 2021: Opinion Contributed Talk: Sergey Levine | Sergey Levine
- 2021: Offline Meta-Reinforcement Learning with Online Self-Supervision Q&A | Vitchyr Pong · Ashvin Nair · Laura Smith · Catherine Huang · Sergey Levine
- 2021: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization Q&A | Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine
- 2021 Workshop: Distribution shifts: connecting methods and applications (DistShift) | Shiori Sagawa · Pang Wei Koh · Fanny Yang · Hongseok Namkoong · Jiashi Feng · Kate Saenko · Percy Liang · Sarah Bird · Sergey Levine
- 2021 Oral: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification | Ben Eysenbach · Sergey Levine · Russ Salakhutdinov
- 2021 Poster: Robust Predictable Control | Ben Eysenbach · Russ Salakhutdinov · Sergey Levine
- 2021 Poster: Which Mutual-Information Representation Learning Objectives are Sufficient for Control? | Kate Rakelly · Abhishek Gupta · Carlos Florensa · Sergey Levine
- 2021 Poster: COMBO: Conservative Offline Model-Based Policy Optimization | Tianhe Yu · Aviral Kumar · Rafael Rafailov · Aravind Rajeswaran · Sergey Levine · Chelsea Finn
- 2021 Poster: Outcome-Driven Reinforcement Learning via Variational Inference | Tim G. J. Rudner · Vitchyr Pong · Rowan McAllister · Yarin Gal · Sergey Levine
- 2021 Poster: Bayesian Adaptation for Covariate Shift | Aurick Zhou · Sergey Levine
- 2021 Poster: Offline Reinforcement Learning as One Big Sequence Modeling Problem | Michael Janner · Qiyang Li · Sergey Levine
- 2021 Poster: Pragmatic Image Compression for Human-in-the-Loop Decision-Making | Sid Reddy · Anca Dragan · Sergey Levine
- 2021 Poster: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification | Ben Eysenbach · Sergey Levine · Russ Salakhutdinov
- 2021 Oral: Deep Reinforcement Learning at the Edge of the Statistical Precipice | Rishabh Agarwal · Max Schwarzer · Pablo Samuel Castro · Aaron Courville · Marc Bellemare
- 2021 Poster: Information is Power: Intrinsic Control via Information Capture | Nicholas Rhinehart · Jenny Wang · Glen Berseth · John Co-Reyes · Danijar Hafner · Chelsea Finn · Sergey Levine
- 2021 Poster: Conservative Data Sharing for Multi-Task Offline Reinforcement Learning | Tianhe Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Sergey Levine · Chelsea Finn
- 2021 Poster: Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning | Hiroki Furuta · Tadashi Kozuno · Tatsuya Matsushima · Yutaka Matsuo · Shixiang (Shane) Gu
- 2021 Poster: Deep Reinforcement Learning at the Edge of the Statistical Precipice | Rishabh Agarwal · Max Schwarzer · Pablo Samuel Castro · Aaron Courville · Marc Bellemare
- 2021 Poster: Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability | Dibya Ghosh · Jad Rahme · Aviral Kumar · Amy Zhang · Ryan Adams · Sergey Levine
- 2021 Poster: The Difficulty of Passive Learning in Deep Reinforcement Learning | Georg Ostrovski · Pablo Samuel Castro · Will Dabney
- 2021 Poster: MICo: Improved representations via sampling-based state similarity for Markov decision processes | Pablo Samuel Castro · Tyler Kastner · Prakash Panangaden · Mark Rowland
- 2021 Poster: Autonomous Reinforcement Learning via Subgoal Curricula | Archit Sharma · Abhishek Gupta · Sergey Levine · Karol Hausman · Chelsea Finn
- 2021 Poster: Adaptive Risk Minimization: Learning to Adapt to Domain Shift | Marvin Zhang · Henrik Marklund · Nikita Dhawan · Abhishek Gupta · Sergey Levine · Chelsea Finn
- 2020: Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization | Brandon Trabucco · Aviral Kumar · XINYANG GENG · Sergey Levine
- 2020: Conservative Objective Models: A Simple Approach to Effective Model-Based Optimization | Brandon Trabucco · Aviral Kumar · XINYANG GENG · Sergey Levine
- 2020: Contributed Talk #3: Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning | Rishabh Agarwal · Marlos C. Machado · Pablo Samuel Castro · Marc Bellemare
- 2020: Panel | Emma Brunskill · Nan Jiang · Nando de Freitas · Finale Doshi-Velez · Sergey Levine · John Langford · Lihong Li · George Tucker · Rishabh Agarwal · Aviral Kumar
- 2020: Panel discussion | Pierre-Yves Oudeyer · Marc Bellemare · Peter Stone · Matt Botvinick · Susan Murphy · Anusha Nagabandi · Ashley Edwards · Karen Liu · Pieter Abbeel
- 2020: Contributed Talk: MaxEnt RL and Robust Control | Benjamin Eysenbach · Sergey Levine
- 2020: Invited talk: Marc Bellemare "Autonomous navigation of stratospheric balloons using reinforcement learning" | Marc Bellemare
- 2020 Poster: Weakly-Supervised Reinforcement Learning for Controllable Behavior | Lisa Lee · Benjamin Eysenbach · Russ Salakhutdinov · Shixiang (Shane) Gu · Chelsea Finn
- 2020 Poster: Model Inversion Networks for Model-Based Optimization | Aviral Kumar · Sergey Levine
- 2020 Poster: Continual Learning of Control Primitives: Skill Discovery via Reset-Games | Kelvin Xu · Siddharth Verma · Chelsea Finn · Sergey Levine
- 2020 Poster: Gradient Surgery for Multi-Task Learning | Tianhe Yu · Saurabh Kumar · Abhishek Gupta · Sergey Levine · Karol Hausman · Chelsea Finn
- 2020 Poster: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement | Benjamin Eysenbach · XINYANG GENG · Sergey Levine · Russ Salakhutdinov
- 2020 Poster: Conservative Q-Learning for Offline Reinforcement Learning | Aviral Kumar · Aurick Zhou · George Tucker · Sergey Levine
- 2020 Oral: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement | Benjamin Eysenbach · XINYANG GENG · Sergey Levine · Russ Salakhutdinov
- 2020 Tutorial: (Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications Q&A | Sergey Levine · Aviral Kumar
- 2020 Poster: Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction | Michael Janner · Igor Mordatch · Sergey Levine
- 2020 Poster: One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL | Saurabh Kumar · Aviral Kumar · Sergey Levine · Chelsea Finn
- 2020 Poster: MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning | Elise van der Pol · Daniel E Worrall · Herke van Hoof · Frans Oliehoek · Max Welling
- 2020 Poster: Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors | Karl Pertsch · Oleh Rybkin · Frederik Ebert · Shenghao Zhou · Dinesh Jayaraman · Chelsea Finn · Sergey Levine
- 2020 Poster: Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model | Alex X. Lee · Anusha Nagabandi · Pieter Abbeel · Sergey Levine
- 2020 Poster: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design | Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine
- 2020 Poster: MOPO: Model-based Offline Policy Optimization | Tianhe Yu · Garrett Thomas · Lantao Yu · Stefano Ermon · James Zou · Sergey Levine · Chelsea Finn · Tengyu Ma
- 2020 Poster: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction | Aviral Kumar · Abhishek Gupta · Sergey Levine
- 2020 Spotlight: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction | Aviral Kumar · Abhishek Gupta · Sergey Levine
- 2020 Oral: Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design | Michael Dennis · Natasha Jaques · Eugene Vinitsky · Alexandre Bayen · Stuart Russell · Andrew Critch · Sergey Levine
- 2020 Tutorial: (Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications | Sergey Levine · Aviral Kumar
- 2019: Poster and Coffee Break 2 | Karol Hausman · Kefan Dong · Ken Goldberg · Lihong Li · Lin Yang · Lingxiao Wang · Lior Shani · Liwei Wang · Loren Amdahl-Culleton · Lucas Cassano · Marc Dymetman · Marc Bellemare · Marcin Tomczak · Margarita Castro · Marius Kloft · Marius-Constantin Dinu · Markus Holzleitner · Martha White · Mengdi Wang · Michael Jordan · Mihailo Jovanovic · Ming Yu · Minshuo Chen · Moonkyung Ryu · Muhammad Zaheer · Naman Agarwal · Nan Jiang · Niao He · Nikolaus Yasui · Nikos Karampatziakis · Nino Vieillard · Ofir Nachum · Olivier Pietquin · Ozan Sener · Pan Xu · Parameswaran Kamalaruban · Paul Mineiro · Paul Rolland · Philip Amortila · Pierre-Luc Bacon · Prakash Panangaden · Qi Cai · Qiang Liu · Quanquan Gu · Raihan Seraj · Richard Sutton · Rick Valenzano · Robert Dadashi · Rodrigo Toro Icarte · Roshan Shariff · Roy Fox · Ruosong Wang · Saeed Ghadimi · Samuel Sokota · Sean Sinclair · Sepp Hochreiter · Sergey Levine · Sergio Valcarcel Macua · Sham Kakade · Shangtong Zhang · Sheila McIlraith · Shie Mannor · Shimon Whiteson · Shuai Li · Shuang Qiu · Wai Lok Li · Siddhartha Banerjee · Sitao Luan · Tamer Basar · Thinh Doan · Tianhe Yu · Tianyi Liu · Tom Zahavy · Toryn Klassen · Tuo Zhao · Vicenç Gómez · Vincent Liu · Volkan Cevher · Wesley Suttle · Xiao-Wen Chang · Xiaohan Wei · Xiaotong Liu · Xingguo Li · Xinyi Chen · Xingyou Song · Yao Liu · YiDing Jiang · Yihao Feng · Yilun Du · Yinlam Chow · Yinyu Ye · Yishay Mansour · Yonathan Efroni · Yongxin Chen · Yuanhao Wang · Bo Dai · Chen-Yu Wei · Harsh Shrivastava · Hongyang Zhang · Qinqing Zheng · SIDDHARTHA SATPATHI · Xueqing Liu · Andreu Vall
- 2019: Poster Presentations | Rahul Mehta · Andrew Lampinen · Binghong Chen · Sergio Pascual-Diaz · Jordi Grau-Moya · Aldo Faisal · Jonathan Tompson · Yiren Lu · Khimya Khetarpal · Martin Klissarov · Pierre-Luc Bacon · Doina Precup · Thanard Kurutach · Aviv Tamar · Pieter Abbeel · Jinke He · Maximilian Igl · Shimon Whiteson · Wendelin Boehmer · Raphaël Marinier · Olivier Pietquin · Karol Hausman · Sergey Levine · Chelsea Finn · Tianhe Yu · Lisa Lee · Benjamin Eysenbach · Emilio Parisotto · Eric Xing · Ruslan Salakhutdinov · Hongyu Ren · Anima Anandkumar · Deepak Pathak · Christopher Lu · Trevor Darrell · Alexei Efros · Phillip Isola · Feng Liu · Bo Han · Gang Niu · Masashi Sugiyama · Saurabh Kumar · Janith Petangoda · Johan Ferret · James McClelland · Kara Liu · Animesh Garg · Robert Lange
- 2019: Poster Session | Matthia Sabatelli · Adam Stooke · Amir Abdi · Paulo Rauber · Leonard Adolphs · Ian Osband · Hardik Meisheri · Karol Kurach · Johannes Ackermann · Matt Benatan · GUO ZHANG · Chen Tessler · Dinghan Shen · Mikayel Samvelyan · Riashat Islam · Murtaza Dalal · Luke Harries · Andrey Kurenkov · Konrad Żołna · Sudeep Dasari · Kristian Hartikainen · Ofir Nachum · Kimin Lee · Markus Holzleitner · Vu Nguyen · Francis Song · Christopher Grimm · Felipe Leno da Silva · Yuping Luo · Yifan Wu · Alex Lee · Thomas Paine · Wei-Yang Qu · Daniel Graves · Yannis Flet-Berliac · Yunhao Tang · Suraj Nair · Matthew Hausknecht · Akhil Bagaria · Simon Schmitt · Bowen Baker · Paavo Parmas · Benjamin Eysenbach · Lisa Lee · Siyu Lin · Daniel Seita · Abhishek Gupta · Riley Simmons-Edler · Yijie Guo · Kevin Corder · Vikash Kumar · Scott Fujimoto · Adam Lerer · Ignasi Clavera Gilaberte · Nicholas Rhinehart · Ashvin Nair · Ge Yang · Lingxiao Wang · Sungryull Sohn · J. Fernando Hernandez-Garcia · Xian Yeow Lee · Rupesh Srivastava · Khimya Khetarpal · Chenjun Xiao · Luckeciano Carvalho Melo · Rishabh Agarwal · Tianhe Yu · Glen Berseth · Devendra Singh Chaplot · Jie Tang · Anirudh Srinivasan · Tharun Kumar Reddy Medini · Aaron Havens · Misha Laskin · Asier Mujika · Rohan Saphal · Joseph Marino · Alex Ray · Joshua Achiam · Ajay Mandlekar · Zhuang Liu · Danijar Hafner · Zhiwen Tang · Ted Xiao · Michael Walton · Jeff Druce · Ferran Alet · Zhang-Wei Hong · Stephanie Chan · Anusha Nagabandi · Hao Liu · Hao Sun · Ge Liu · Dinesh Jayaraman · John Co-Reyes · Sophia Sanborn
- 2019: Poster Spotlight 2 | Aaron Sidford · Mengdi Wang · Lin Yang · Yinyu Ye · Zuyue Fu · Zhuoran Yang · Yongxin Chen · Zhaoran Wang · Ofir Nachum · Bo Dai · Ilya Kostrikov · Dale Schuurmans · Ziyang Tang · Yihao Feng · Lihong Li · Denny Zhou · Qiang Liu · Rodrigo Toro Icarte · Ethan Waldie · Toryn Klassen · Rick Valenzano · Margarita Castro · Simon Du · Sham Kakade · Ruosong Wang · Minshuo Chen · Tianyi Liu · Xingguo Li · Zhaoran Wang · Tuo Zhao · Philip Amortila · Doina Precup · Prakash Panangaden · Marc Bellemare
- 2019: Poster Session | Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou
- 2019: Poster Session | Rishav Chourasia · Yichong Xu · Corinna Cortes · Chien-Yi Chang · Yoshihiro Nagano · So Yeon Min · Benedikt Boecking · Phi Vu Tran · Kamyar Ghasemipour · Qianggang Ding · Shouvik Mani · Vikram Voleti · Rasool Fakoor · Miao Xu · Kenneth Marino · Lisa Lee · Volker Tresp · Jean-Francois Kagy · Marvin Zhang · Barnabas Poczos · Dinesh Khandelwal · Adrien Bardes · Evan Shelhamer · Jiacheng Zhu · Ziming Li · Xiaoyan Li · Dmitrii Krasheninnikov · Ruohan Wang · Mayoore Jaiswal · Emad Barsoum · Suvansh Sanjeev · Theeraphol Wattanavekin · Qizhe Xie · Sifan Wu · Yuki Yoshida · David Kanaa · Sina Khoshfetrat Pakazad · Mehdi Maasoumy
- 2019 Workshop: Learning with Rich Experience: Integration of Learning Paradigms | Zhiting Hu · Andrew Wilson · Chelsea Finn · Lisa Lee · Taylor Berg-Kirkpatrick · Ruslan Salakhutdinov · Eric Xing
- 2019 Poster: Wasserstein Dependency Measure for Representation Learning | Sherjil Ozair · Corey Lynch · Yoshua Bengio · Aaron van den Oord · Sergey Levine · Pierre Sermanet
- 2019 Poster: Planning with Goal-Conditioned Policies | Soroush Nasiriany · Vitchyr Pong · Steven Lin · Sergey Levine
- 2019 Poster: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning | Benjamin Eysenbach · Russ Salakhutdinov · Sergey Levine
- 2019 Poster: MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies | Xue Bin Peng · Michael Chang · Grace Zhang · Pieter Abbeel · Sergey Levine
- 2019 Poster: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction | Aviral Kumar · Justin Fu · George Tucker · Sergey Levine
- 2019 Poster: Unsupervised Curricula for Visual Meta-Reinforcement Learning | Allan Jabri · Kyle Hsu · Abhishek Gupta · Benjamin Eysenbach · Sergey Levine · Chelsea Finn
- 2019 Poster: A Geometric Perspective on Optimal Representations for Reinforcement Learning | Marc Bellemare · Will Dabney · Robert Dadashi · Adrien Ali Taiga · Pablo Samuel Castro · Nicolas Le Roux · Dale Schuurmans · Tor Lattimore · Clare Lyle
- 2019 Poster: SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies | Kamyar Ghasemipour · Shixiang (Shane) Gu · Richard Zemel
- 2019 Poster: Language as an Abstraction for Hierarchical Deep Reinforcement Learning | YiDing Jiang · Shixiang (Shane) Gu · Kevin Murphy · Chelsea Finn
- 2019 Poster: Compositional Plan Vectors | Coline Devin · Daniel Geng · Pieter Abbeel · Trevor Darrell · Sergey Levine
- 2019 Spotlight: Unsupervised Curricula for Visual Meta-Reinforcement Learning | Allan Jabri · Kyle Hsu · Abhishek Gupta · Benjamin Eysenbach · Sergey Levine · Chelsea Finn
- 2019 Poster: Causal Confusion in Imitation Learning | Pim de Haan · Dinesh Jayaraman · Sergey Levine
- 2019 Poster: Meta-Learning with Implicit Gradients | Aravind Rajeswaran · Chelsea Finn · Sham Kakade · Sergey Levine
- 2019 Poster: When to Trust Your Model: Model-Based Policy Optimization | Michael Janner · Justin Fu · Marvin Zhang · Sergey Levine
- 2019 Poster: Guided Meta-Policy Search | Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn
- 2019 Spotlight: Guided Meta-Policy Search | Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn
- 2019 Oral: Causal Confusion in Imitation Learning | Pim de Haan · Dinesh Jayaraman · Sergey Levine
- 2019 Poster: Hyperspherical Prototype Networks | Pascal Mettes · Elise van der Pol · Cees Snoek
- 2018: Live competition The AI Driving Olympics: Supervised Learning approaches | Manfred Díaz · Julian Zilly
- 2018: Meta-Learning to Follow Instructions, Examples, and Demonstrations | Sergey Levine
- 2018: TBA 2 | Sergey Levine
- 2018: Control as Inference and Soft Deep RL (Sergey Levine) | Sergey Levine
- 2018: TBC 9 | Sergey Levine
- 2018 Poster: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models | Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Spotlight: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models | Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Poster: Probabilistic Model-Agnostic Meta-Learning | Chelsea Finn · Kelvin Xu · Sergey Levine
- 2018 Poster: Meta-Reinforcement Learning of Structured Exploration Strategies | Abhishek Gupta · Russell Mendonca · YuXuan Liu · Pieter Abbeel · Sergey Levine
- 2018 Poster: Visual Reinforcement Learning with Imagined Goals | Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine
- 2018 Spotlight: Visual Reinforcement Learning with Imagined Goals | Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine
- 2018 Spotlight: Meta-Reinforcement Learning of Structured Exploration Strategies | Abhishek Gupta · Russell Mendonca · YuXuan Liu · Pieter Abbeel · Sergey Levine
- 2018 Poster: Visual Memory for Robust Path Following | Ashish Kumar · Saurabh Gupta · David Fouhey · Sergey Levine · Jitendra Malik
- 2018 Poster: Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition | Justin Fu · Avi Singh · Dibya Ghosh · Larry Yang · Sergey Levine
- 2018 Oral: Visual Memory for Robust Path Following | Ashish Kumar · Saurabh Gupta · David Fouhey · Sergey Levine · Jitendra Malik
- 2018 Poster: Data-Efficient Hierarchical Reinforcement Learning | Ofir Nachum · Shixiang (Shane) Gu · Honglak Lee · Sergey Levine
- 2018 Poster: Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior | Sid Reddy · Anca Dragan · Sergey Levine
- 2017 Workshop: Workshop on Meta-Learning | Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine
- 2017 Poster: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning | Justin Fu · John Co-Reyes · Sergey Levine
- 2017 Spotlight: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning | Justin Fu · John Co-Reyes · Sergey Levine
- 2017 Demonstration: Deep Robotic Learning using Visual Imagination and Meta-Learning | Chelsea Finn · Frederik Ebert · Tianhe Yu · Annie Xie · Sudeep Dasari · Pieter Abbeel · Sergey Levine
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning | Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016 Workshop: Deep Learning for Action and Interaction | Chelsea Finn · Raia Hadsell · David Held · Sergey Levine · Percy Liang
- 2016: Sergey Levine (University of California, Berkeley) | Sergey Levine
- 2016 Poster: Unifying Count-Based Exploration and Intrinsic Motivation | Marc Bellemare · Sriram Srinivasan · Georg Ostrovski · Tom Schaul · David Saxton · Remi Munos
- 2016 Poster: Value Iteration Networks | Aviv Tamar · Sergey Levine · Pieter Abbeel · YI WU · Garrett Thomas
- 2016 Oral: Value Iteration Networks | Aviv Tamar · Sergey Levine · Pieter Abbeel · YI WU · Garrett Thomas
- 2016 Poster: Safe and Efficient Off-Policy Reinforcement Learning | Remi Munos · Tom Stepleton · Anna Harutyunyan · Marc Bellemare
- 2015: Deep Robotic Learning | Sergey Levine
- 2014 Workshop: Novel Trends and Applications in Reinforcement Learning | Csaba Szepesvari · Marc Deisenroth · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez
- 2014 Poster: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics | Sergey Levine · Pieter Abbeel
- 2014 Spotlight: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics | Sergey Levine · Pieter Abbeel
- 2013 Poster: Variational Policy Search via Trajectory Optimization | Sergey Levine · Vladlen Koltun
- 2010 Poster: Feature Construction for Inverse Reinforcement Learning | Sergey Levine · Zoran Popovic · Vladlen Koltun