In this study, we leverage the deliberate and systematic fault-injection capabilities of an open-source benchmark suite to perform a series of experiments on state-of-the-art deep and robust reinforcement learning algorithms. We aim to benchmark robustness in the context of continuous action spaces, a setting crucial for deployment in robot control. We find that robustness is more prominent for action disturbances than for disturbances to observations and dynamics. We also observe that state-of-the-art approaches that are not explicitly designed to improve robustness perform at a level comparable to those that are. Our study and results are intended to provide insight into the current state of safe and robust reinforcement learning and a foundation for the advancement of the field, in particular for deployment in robotic systems.
NOTE: We plan to submit a subset of our results in a shorter 4-page version of this paper to the "NeurIPS 2022 Workshop on Distribution Shifts (DistShift)". DistShift does NOT have proceedings and will be held on a different date (Dec. 3) than TEA.
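As a rough illustration of the kind of disturbance injection evaluated here, the sketch below wraps a standard continuous-control environment and adds zero-mean Gaussian noise to actions and observations at evaluation time. It assumes the Gymnasium API; the wrapper name and the obs_std/act_std parameters are illustrative assumptions, not the interface of the paper's benchmark suite.

import numpy as np
import gymnasium as gym


class DisturbanceWrapper(gym.Wrapper):
    """Illustrative wrapper injecting additive Gaussian disturbances
    into observations and actions (hypothetical, not the paper's suite)."""

    def __init__(self, env, obs_std=0.0, act_std=0.0, seed=None):
        super().__init__(env)
        self.obs_std = obs_std  # std. dev. of observation noise
        self.act_std = act_std  # std. dev. of action noise
        self.rng = np.random.default_rng(seed)

    def _perturb(self, x, std):
        # Add zero-mean Gaussian noise with the given standard deviation.
        if std <= 0:
            return x
        return x + self.rng.normal(0.0, std, size=np.shape(x))

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return self._perturb(obs, self.obs_std), info

    def step(self, action):
        # Disturb the action before it reaches the plant, keeping it feasible.
        noisy_action = np.clip(
            self._perturb(action, self.act_std),
            self.env.action_space.low,
            self.env.action_space.high,
        )
        obs, reward, terminated, truncated, info = self.env.step(noisy_action)
        # Disturb the observation returned to the policy.
        return self._perturb(obs, self.obs_std), reward, terminated, truncated, info


# Example: evaluate a trained policy under action disturbances only.
env = DisturbanceWrapper(gym.make("Pendulum-v1"), act_std=0.1, seed=0)

Dynamics disturbances (e.g., perturbed masses or external forces) cannot be injected through such an interface-level wrapper; they require access to the simulator's physical parameters, which is what a dedicated fault-injection benchmark suite provides.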
Author Information
Catherine Glossop (University of Toronto)
Jacopo Panerati (University of Toronto)
Amrit Krishnan (Vector Institute)
Hi, I'm Amrit. I work at the Vector Institute, based in Toronto. My interests lie at the intersection of health and ML.
Zhaocong Yuan (University of Toronto)
I am a MASc student in RL and robotics at the University of Toronto, supervised by Prof. Angela Schoellig at the Dynamic Systems Lab (DSL), which is also part of the Vector Institute and the UofT Robotics Institute. I received my BASc degree in Engineering Science (Robotics) from UofT. Before joining the DSL, I interned with the Apple Siri team in Seattle and at the Nvidia Toronto AI Lab led by Prof. Sanja Fidler. I also spent time as a research student at the Data-Driven Decision Making Lab led by Prof. Scott Sanner.
Angela Schoellig (University of Toronto, Vector Institute)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 : Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection
More from the Same Authors
- 2021 : Tutorial: Safe Learning for Decision Making
  Angela Schoellig · SiQi Zhou · Lukas Brunke · Animesh Garg · Melissa Greeff · Somil Bansal
- 2022 : Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection
  Catherine Glossop · Jacopo Panerati · Amrit Krishnan · Zhaocong Yuan · Angela Schoellig
- 2021 : Concluding Remarks
  Angela Schoellig · Lukas Brunke
- 2021 : Invited Speaker Panel
  Sham Kakade · Minmin Chen · Philip Thomas · Angela Schoellig · Barbara Engelhardt · Doina Precup · George Tucker
- 2021 : Offline RL for Robotics
  Angela Schoellig
- 2021 : Panel A: Deployable Learning Algorithms for Embodied Systems
  Shuran Song · Martin Riedmiller · Nick Roy · Aude G Billard · Angela Schoellig · SiQi Zhou
- 2021 Workshop: Deployable Decision Making in Embodied Systems (DDM)
  Angela Schoellig · Animesh Garg · Somil Bansal · SiQi Zhou · Melissa Greeff · Lukas Brunke
- 2021 : Opening Remarks & Introduction
  Angela Schoellig · Somil Bansal
- 2020 : Keynote: Angela Schoellig
  Angela Schoellig
- 2020 : Mini-panel discussion 3 - Prioritizing Real World RL Challenges
  Chelsea Finn · Thomas Dietterich · Angela Schoellig · Anca Dragan · Anusha Nagabandi · Doina Precup
- 2020 : Invited Talk: Angela Schoellig
  Angela Schoellig
- 2019 : Invited Talk - Angela Schoellig
  Angela Schoellig
- 2017 Poster: Safe Model-based Reinforcement Learning with Stability Guarantees
  Felix Berkenkamp · Matteo Turchetta · Angela Schoellig · Andreas Krause