In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multiagent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside the field gain a high-level view of the current state of the art and potential directions for future contributions.
Fri 8:30 a.m. - 9:00 a.m. | Invited talk: Pierre-Yves Oudeyer "Machines that invent their own problems: Towards open-ended learning of skills" (Talk) | Pierre-Yves Oudeyer
Fri 9:00 a.m. - 9:15 a.m. | Contributed Talk: Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning (Talk) | Sammy Christen · Lukas Jendele · Emre Aksan · Otmar Hilliges
Fri 9:15 a.m. - 9:30 a.m. | Contributed Talk: Maximum Reward Formulation In Reinforcement Learning (Talk) | Vijaya Sai Krishna Gottipati · Yashaswi Pathak · Rohan Nuttall · Sahir . · Raviteja Chunduru · Ahmed Touati · Sriram Ganapathi · Matthew Taylor · Sarath Chandar
Fri 9:30 a.m. - 9:45 a.m. | Contributed Talk: Accelerating Reinforcement Learning with Learned Skill Priors (Talk) | Karl Pertsch · Youngwoon Lee · Joseph Lim
Fri 9:45 a.m. - 10:00 a.m. | Contributed Talk: Asymmetric self-play for automatic goal discovery in robotic manipulation (Talk) | OpenAI Robotics · Matthias Plappert · Raul Sampedro · Tao Xu · Ilge Akkaya · Vineet Kosaraju · Peter Welinder · Ruben D'Sa · Arthur Petron · Henrique Ponde · Alex Paino · Hyeonwoo Noh · Lilian Weng · Qiming Yuan · Casey Chu · Wojciech Zaremba
Fri 10:00 a.m. - 10:30 a.m. | Invited talk: Marc Bellemare "Autonomous navigation of stratospheric balloons using reinforcement learning" (Talk) | Marc Bellemare
Fri 10:30 a.m. - 11:00 a.m. | Break
Fri 11:00 a.m. - 11:30 a.m. | Invited talk: Peter Stone "Grounded Simulation Learning for Sim2Real with Connections to Off-Policy Reinforcement Learning" (Talk) | Peter Stone
Abstract: For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk introduces Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. Grounded Simulation Learning has led to the fastest known stable walk on a widely used humanoid robot. Connections to theoretical advances in off-policy reinforcement learning will be highlighted.
Fri 11:30 a.m. - 11:45 a.m. | Contributed Talk: Mirror Descent Policy Optimization (Talk) | Manan Tomar · Lior Shani · Yonathan Efroni · Mohammad Ghavamzadeh
Fri 11:45 a.m. - 12:00 p.m. | Contributed Talk: Planning from Pixels using Inverse Dynamics Models (Talk) | Keiran Paster · Sheila McIlraith · Jimmy Ba
Fri 12:00 p.m. - 12:30 p.m. | Invited talk: Matt Botvinick "Alchemy: A Benchmark Task Distribution for Meta-Reinforcement Learning Research" (Talk) | Matt Botvinick
Fri 12:30 p.m. - 1:30 p.m. | Poster session 1 (Poster session)
Fri 1:30 p.m. - 2:00 p.m. | Invited talk: Susan Murphy "We used RL but…. Did it work?!" (Talk) | Susan Murphy
Abstract: Digital healthcare is a growing area of importance in modern healthcare due to its potential to help individuals improve their behaviors so as to better manage chronic health challenges such as hypertension, mental health, cancer and so on. Digital apps and wearables observe the user's state via sensors/self-report, deliver treatment actions (reminders, motivational messages, suggestions, social outreach, ...) and observe rewards repeatedly on the user across time. This area is seeing increasing interest from RL researchers, with the goal of including in the digital app/wearable an RL algorithm that "personalizes" the treatments to the user. But after RL is run on a number of users, how do we know whether the RL algorithm actually personalized the sequential treatments to the user? In this talk we report on our first efforts to address this question after our RL algorithm was deployed on each of 111 individuals with hypertension.
Fri 2:00 p.m. - 2:15 p.m. | Contributed Talk: MaxEnt RL and Robust Control (Talk) | Benjamin Eysenbach · Sergey Levine
Fri 2:15 p.m. - 2:30 p.m. | Contributed Talk: Reset-Free Lifelong Learning with Skill-Space Planning (Talk) | Kevin Lu · Aditya Grover · Pieter Abbeel · Igor Mordatch
Fri 2:30 p.m. - 3:00 p.m. | Invited talk: Anusha Nagabandi "Model-based Deep Reinforcement Learning for Robotic Systems" (Talk) | Anusha Nagabandi
Abstract: Deep learning has shown promising results in robotics, but we are still far from having intelligent systems that can operate in the unstructured settings of the real world, where disturbances, variations, and unobserved factors lead to a dynamic environment. In this talk, we'll see that model-based deep RL can indeed allow for efficient skill acquisition, as well as the ability to repurpose models to solve a variety of tasks. We'll scale up these approaches to enable locomotion with a 6-DoF legged robot on varying terrains in the real world, as well as dexterous manipulation with a 24-DoF anthropomorphic hand in the real world. We then focus on the inevitable mismatch between an agent's training conditions and the test conditions in which it may actually be deployed, thus illuminating the need for adaptive systems. Inspired by the ability of humans and animals to adapt quickly in the face of unexpected changes, we present a meta-learning algorithm within this model-based RL framework to enable online adaptation of large, high-capacity models using only small amounts of data from the new task. These fast adaptation capabilities are seen in both simulation and the real world, with experiments such as a 6-legged robot adapting online to an unexpected payload or suddenly losing a leg. We will then further extend the capabilities of our robotic systems by enabling the agents to reason directly from raw image observations. Bridging the benefits of representation learning techniques with the adaptation capabilities of meta-RL, we'll present a unified framework for effective meta-RL from images. With robotic arms in the real world that learn peg insertion and ethernet cable insertion to varying targets, we'll see the fast acquisition of new skills directly from raw image observations. Finally, this talk will conclude that model-based deep RL provides a framework for making sense of the world, thus allowing for reasoning and adaptation capabilities that are necessary for successful operation in the dynamic settings of the real world.
Fri 3:00 p.m. - 3:30 p.m. | Break
Fri 3:30 p.m. - 4:00 p.m. | Invited talk: Ashley Edwards "Learning Offline from Observation" (Talk) | Ashley Edwards
Abstract: A common trope in sci-fi is to have a robot that can quickly solve some problem after watching a person, studying a video, or reading a book. While these settings are (currently) fictional, the benefits are real. Agents that can solve tasks by observing others have the potential to greatly reduce the burden on their human teachers, removing some of the need to hand-specify rewards or goals. In this talk, I consider the question of how an agent can not only learn by observing others, but also how it can learn quickly by training offline before taking any steps in the environment. First, I will describe an approach that trains a latent policy directly from state observations, which can then be quickly mapped to real actions in the agent’s environment. Then I will describe how we can train a novel value function, Q(s,s’), to learn off-policy from observations. Unlike previous imitation-from-observation approaches, this formulation goes beyond simply imitating and rather enables learning from potentially suboptimal observations.
Fri 4:00 p.m. - 4:07 p.m. | NeurIPS RL Competitions: Flatland challenge (Talk) | Sharada Mohanty
Fri 4:07 p.m. - 4:15 p.m. | NeurIPS RL Competitions: Learning to run a power network (Talk) | Antoine Marot
Fri 4:15 p.m. - 4:22 p.m. | NeurIPS RL Competitions: Procgen challenge (Talk) | Sharada Mohanty
Fri 4:22 p.m. - 4:30 p.m. | NeurIPS RL Competitions: MineRL (Talk) | William Guss · Stephanie Milani
Fri 4:30 p.m. - 5:00 p.m. | Invited talk: Karen Liu "Deep Reinforcement Learning for Physical Human-Robot Interaction" (Talk) | Karen Liu
Abstract: Creating realistic virtual humans has traditionally been considered a research problem in Computer Animation primarily for entertainment applications. With the recent breakthrough in collaborative robots and deep reinforcement learning, accurately modeling human movements and behaviors has become a common challenge also faced by researchers in robotics and artificial intelligence. For example, mobile robots and autonomous vehicles can benefit from training in environments populated with ambulating humans and learning to avoid colliding with them. Healthcare robotics, on the other hand, needs to embrace physical contact and learn to utilize it to enable humans' activities of daily living. An immediate concern in developing such an autonomous and powered robotic device is the safety of human users during the early development phase, when the control policies are still largely suboptimal. Learning from physically simulated humans and environments presents a promising alternative that enables robots to safely make and learn from mistakes without putting real people at risk. However, deploying such policies to interact with people in the real world adds additional complexity to the already challenging sim-to-real transfer problem. In this talk, I will present our current progress on solving the problem of sim-to-real transfer with humans in the environment, actively interacting with the robots through physical contact. We tackle the problem from two fronts: developing more relevant human models to facilitate robot learning and developing human-aware robot perception and control policies. As a concrete example, we develop a mobile manipulator to put clothes on people with physical impairments, enabling them to carry out day-to-day tasks and maintain independence.
Fri 5:00 p.m. - 6:00 p.m. | Panel discussion | Pierre-Yves Oudeyer · Marc Bellemare · Peter Stone · Matt Botvinick · Susan Murphy · Anusha Nagabandi · Ashley Edwards · Karen Liu · Pieter Abbeel
Fri 6:00 p.m. - 7:00 p.m. | Poster session 2 (Poster session)
Poster: Planning from Pixels using Inverse Dynamics Models
Poster: OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning
Poster: Maximum Reward Formulation In Reinforcement Learning
Poster: Reset-Free Lifelong Learning with Skill-Space Planning
Poster: Mirror Descent Policy Optimization
Poster: MaxEnt RL and Robust Control
Poster: Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning
Poster: Provably Efficient Policy Optimization via Thompson Sampling
Poster: Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates
Poster: Efficient Competitive Self-Play Policy Optimization
Poster: Asymmetric self-play for automatic goal discovery in robotic manipulation
Poster: Correcting Momentum in Temporal Difference Learning
Poster: Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices
Poster: Diverse Exploration via InfoMax Options
Poster: Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads
Poster: Parrot: Data-driven Behavioral Priors for Reinforcement Learning
Poster: C-Learning: Horizon-Aware Cumulative Accessibility Estimation
Poster: Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning
Poster: Data-Efficient Reinforcement Learning with Self-Predictive Representations
Poster: Accelerating Reinforcement Learning with Learned Skill Priors
Poster: C-Learning: Learning to Achieve Goals via Recursive Classification
Poster: Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers
Poster: Learning to Reach Goals via Iterated Supervised Learning
Poster: Unified View of Inference-based Off-policy RL: Decoupling Algorithmic and Implemental Source of Performance Gaps
Poster: Learning to Sample with Local and Global Contexts in Experience Replay Buffer
Poster: Adversarial Environment Generation for Learning to Navigate the Web
Poster: Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments
Poster: DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies
Poster: Discovery of Options via Meta-Gradients
Poster: GRAC: Self-Guided and Self-Regularized Actor-Critic
Poster: Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity
Poster: Deep Bayesian Quadrature Policy Gradient
Poster: PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards
Poster: A Policy Gradient Method for Task-Agnostic Exploration
Poster: Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning
Poster: Skill Transfer via Partially Amortized Hierarchical Planning
Poster: On Effective Parallelization of Monte Carlo Tree Search
Poster: Mastering Atari with Discrete World Models
Poster: Average Reward Reinforcement Learning with Monotonic Policy Improvement
Poster: Combating False Negatives in Adversarial Imitation Learning
Poster: Evaluating Agents Without Rewards
Poster: Learning Latent Landmarks for Generalizable Planning
Poster: Conservative Safety Critics for Exploration
Poster: Solving Compositional Reinforcement Learning Problems via Task Reduction
Poster: Deep Q-Learning with Low Switching Cost
Poster: Learning to Represent Action Values as a Hypergraph on the Action Vertices
Poster: Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets
Poster: TACTO: A Simulator for Learning Control from Touch Sensing
Poster: Safe Reinforcement Learning with Natural Language Constraints
Poster: Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks
Poster: An Examination of Preference-based Reinforcement Learning for Treatment Recommendation
Poster: Model-based Navigation in Environments with Novel Layouts Using Abstract $n$-D Maps
Poster: Online Safety Assurance for Deep Reinforcement Learning
Poster: Lyapunov Barrier Policy Optimization
Poster: Evolving Reinforcement Learning Algorithms
Poster: Chaining Behaviors from Data with Model-Free Reinforcement Learning
Poster: Pairwise Weights for Temporal Credit Assignment
Poster: Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning
Poster: Understanding Learned Reward Functions
Poster: Addressing reward bias in Adversarial Imitation Learning with neutral reward functions
Poster: Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples
Poster: Decoupling Representation Learning from Reinforcement Learning
Poster: Model-Based Reinforcement Learning via Latent-Space Collocation
Poster: A Variational Inference Perspective on Goal-Directed Behavior in Reinforcement Learning
Poster: SCC: an efficient deep reinforcement learning agent mastering the game of StarCraft II
Poster: Predictive PER: Balancing Priority and Diversity towards Stable Deep Reinforcement Learning
Poster: Latent State Models for Meta-Reinforcement Learning from Images
Poster: Dream and Search to Control: Latent Space Planning for Continuous Control
Poster: Explanation Augmented Feedback in Human-in-the-Loop Reinforcement Learning
Poster: Goal-Conditioned Reinforcement Learning in the Presence of an Adversary
Poster: Regularized Inverse Reinforcement Learning
Poster: Domain Adversarial Reinforcement Learning
Poster: Safety Aware Reinforcement Learning
Poster: Sample Efficient Training in Multi-Agent Adversarial Games with Limited Teammate Communication
Poster: Amortized Variational Deep Q Network
Poster: Disentangled Planning and Control in Vision Based Robotics via Reward Machines
Poster: Maximum Mutation Reinforcement Learning for Scalable Control
Poster: Unsupervised Task Clustering for Multi-Task Reinforcement Learning
Poster: Learning Intrinsic Symbolic Rewards in Reinforcement Learning
Poster: Preventing Value Function Collapse in Ensemble Q-Learning by Maximizing Representation Diversity
Poster: Action and Perception as Divergence Minimization
Poster: Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
Poster: D2RL: Deep Dense Architectures in Reinforcement Learning
Poster: Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms
Poster: Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
Poster: What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
Poster: Semantic State Representation for Reinforcement Learning
Poster: Hyperparameter Auto-tuning in Self-Supervised Robotic Learning
Poster: Targeted Query-based Action-Space Adversarial Policies on Deep Reinforcement Learning Agents
Poster: Abstract Value Iteration for Hierarchical Deep Reinforcement Learning
Poster: Compute- and Memory-Efficient Reinforcement Learning with Latent Experience Replay
Poster: Emergent Road Rules In Multi-Agent Driving Environments
Poster: An Algorithmic Causal Model of Credit Assignment in Reinforcement Learning
Poster: Learning to Weight Imperfect Demonstrations
Poster: Structure and randomness in planning and reinforcement learning
Poster: Parameter-based Value Functions
Poster: Influence-aware Memory for Deep Reinforcement Learning in POMDPs
Poster: Modular Training, Integrated Planning Deep Reinforcement Learning for Mobile Robot Navigation
Poster: How to make Deep RL work in Practice
Poster: Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning
Poster: Which Mutual-Information Representation Learning Objectives are Sufficient for Control?
Poster: Curriculum Learning through Distilled Discriminators
Poster: Self-Supervised Policy Adaptation during Deployment
Poster: Trust, but verify: model-based exploration in sparse reward environments
Poster: Optimizing Traffic Bottleneck Throughput using Cooperative, Decentralized Autonomous Vehicles
Poster: Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking
Poster: Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research
Poster: Reinforcement Learning with Latent Flow
Poster: Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization
Poster: AWAC: Accelerating Online Reinforcement Learning With Offline Datasets
Poster: Inter-Level Cooperation in Hierarchical Reinforcement Learning
Poster: Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning
Poster: Multi-Agent Option Critic Architecture
Poster: Measuring Visual Generalization in Continuous Control from Pixels
Poster: Policy Learning Using Weak Supervision
Poster: Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments
Poster: Unsupervised Domain Adaptation for Visual Navigation
Poster: Learning Markov State Abstractions for Deep Reinforcement Learning
Poster: Value Generalization among Policies: Improving Value Function with Policy Representation
Poster: Energy-based Surprise Minimization for Multi-Agent Value Factorization
Poster: Backtesting Optimal Trade Execution Policies in Agent-Based Market Simulator
Poster: Successor Landmarks for Efficient Exploration and Long-Horizon Navigation
Poster: Multi-task Reinforcement Learning with a Planning Quasi-Metric
Poster: R-LAtte: Visual Control via Deep Reinforcement Learning with Attention Network
Poster: Quantifying Differences in Reward Functions
Poster: DERAIL: Diagnostic Environments for Reward And Imitation Learning
Poster: Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations
Poster: Unlocking the Potential of Deep Counterfactual Value Networks
Poster: FactoredRL: Leveraging Factored Graphs for Deep Reinforcement Learning
Poster: Reusability and Transferability of Macro Actions for Reinforcement Learning
Poster: Interactive Visualization for Debugging RL
Poster: A Deep Value-based Policy Search Approach for Real-world Vehicle Repositioning on Mobility-on-Demand Platforms
Poster: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance
Poster: Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks
Poster: Learning Accurate Long-term Dynamics for Model-based Reinforcement Learning
Poster: XLVIN: eXecuted Latent Value Iteration Nets
Poster: Beyond Exponentially Discounted Sum: Automatic Learning of Return Function
Poster: XT2: Training an X-to-Text Typing Interface with Online Learning from Implicit Feedback
Poster: Greedy Multi-Step Off-Policy Reinforcement Learning
Poster: Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning
Poster: Robust Domain Randomised Reinforcement Learning through Peer-to-Peer Distillation
Poster: ReaPER: Improving Sample Efficiency in Model-Based Latent Imagination
Poster: Model-Based Reinforcement Learning: A Compressed Survey
Poster: BeBold: Exploration Beyond the Boundary of Explored Regions
Poster: Model-Based Visual Planning with Self-Supervised Functional Distances
Poster: Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
Poster: Utilizing Skipped Frames in Action Repeats via Pseudo-Actions
Poster: Bringing order into Actor-Critic Algorithms using Stackelberg Games
Poster: Continual Model-Based Reinforcement Learning with Hypernetworks
Poster: Online Hyper-parameter Tuning in Off-policy Learning via Evolutionary Strategies
Poster: Policy Guided Planning in Learned Latent Space
Poster: PettingZoo: Gym for Multi-Agent Reinforcement Learning
Poster: DREAM: Deep Regret minimization with Advantage baselines and Model-free learning