Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection
Catherine Glossop · Jacopo Panerati · Amrit Krishnan · Zhaocong Yuan · Angela Schoellig
Event URL: https://openreview.net/forum?id=dJPzobZtpZ

In this study, we leverage the deliberate and systematic fault-injection capabilities of an open-source benchmark suite to perform a series of experiments on state-of-the-art deep and robust reinforcement learning algorithms. We aim to benchmark robustness in the context of continuous action spaces---crucial for deployment in robot control. We find that robustness is more prominent for action disturbances than it is for disturbances to observations and dynamics. We also observe that state-of-the-art approaches that are not explicitly designed to improve robustness perform at a level comparable to that achieved by those that are. Our study and results are intended to provide insight into the current state of safe and robust reinforcement learning and a foundation for the advancement of the field, in particular, for deployment in robotic systems.
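As a rough illustration of the kind of disturbance injection described above, the Gym-style wrapper below adds zero-mean white noise to the commanded actions and to the observations returned to the agent. The class name, noise model, and parameters are illustrative assumptions for this sketch, not the benchmark suite's actual interface.

import numpy as np
import gymnasium as gym

class DisturbanceWrapper(gym.Wrapper):
    """Illustrative sketch: inject white-noise disturbances into actions and observations.

    The interface and noise model here are assumptions, not the benchmark
    suite's actual API.
    """

    def __init__(self, env, action_noise_std=0.0, obs_noise_std=0.0, seed=None):
        super().__init__(env)
        self.action_noise_std = action_noise_std
        self.obs_noise_std = obs_noise_std
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        # Perturb the commanded action before it reaches the plant.
        noisy_action = action + self.rng.normal(0.0, self.action_noise_std, size=np.shape(action))
        noisy_action = np.clip(noisy_action, self.action_space.low, self.action_space.high)
        obs, reward, terminated, truncated, info = self.env.step(noisy_action)
        # Perturb the observation returned to the agent.
        noisy_obs = obs + self.rng.normal(0.0, self.obs_noise_std, size=np.shape(obs))
        return noisy_obs, reward, terminated, truncated, info

Disturbances to the dynamics would instead modify the simulated plant itself (for example, its parameters or externally applied forces) rather than the agent's interface, which is why they are treated as a separate disturbance category in the study.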

Author Information

Catherine Glossop (University of Toronto)
Jacopo Panerati (University of Toronto)
Amrit Krishnan (Vector Institute)

Hi, I'm Amrit. I work at the Vector Institute, based in Toronto. My interests lie at the intersection of health and ML.

Zhaocong Yuan (University of Toronto)

I am a MASc student in RL and robotics at the University of Toronto, supervised by Prof. Angela Schoellig at the Dynamic Systems Lab (DSL), and also part of the Vector Institute and the UofT Robotics Institute. I received my BASc degree in Engineering Science (Robotics) from UofT. Before joining the DSL, I interned with the Apple Siri team in Seattle and at the Nvidia Toronto AI lab led by Prof. Sanja Fidler. I also spent time as a research student at the Data-Driven Decision Making Lab led by Prof. Scott Sanner.

Angela Schoellig (University of Toronto, Vector Institute)
