

Poster in Workshop: Workshop on Machine Learning Safety

On the Robustness of Safe Reinforcement Learning under Observational Perturbations

Zuxin Liu · Zijian Guo · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao


Abstract:

Safe reinforcement learning (RL) trains a policy to maximize the task reward while satisfying safety constraints. While prior works focus on performance optimality, we find that the optimal solutions of many safe RL problems are not robust and safe against observational perturbations. We formally analyze the unique properties of designing effective state adversarial attackers in the safe RL setting. We show that baseline adversarial attack techniques for standard RL tasks are not always effective for safe RL, and we propose two new approaches: one maximizes the cost and the other maximizes the reward. One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward. We further propose a more effective adversarial training framework for safe RL and evaluate it via comprehensive experiments (video demos are available at: \url{https://sites.google.com/view/robustsaferl/home}). This paper provides pioneering work on investigating the safety and robustness of RL under observational attacks for future safe RL studies.
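
To make the "maximum cost" attack concrete, below is a minimal, hedged sketch of a PGD-style observational perturbation that pushes a safe-RL agent toward states its cost critic judges as unsafe. The networks `policy` and `cost_critic`, the dimensions, and the attack hyperparameters are hypothetical placeholders, not the paper's actual implementation; the authors' formulation may differ in details.

```python
# Sketch: L-inf bounded observational attack that maximizes predicted cost.
# Assumes a trained deterministic policy and a learned cost critic Q_c(s, a);
# both are replaced here by untrained placeholder networks for illustration.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2

# Hypothetical stand-ins for a trained safe-RL policy and cost critic.
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
cost_critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))


def max_cost_attack(obs, epsilon=0.05, steps=10, step_size=0.01):
    """Find a perturbation of `obs` within an epsilon L-inf ball that maximizes
    the predicted cost of the action the policy takes at the perturbed
    observation (the 'maximum cost' attack idea described in the abstract)."""
    obs = obs.detach()
    delta = torch.zeros_like(obs, requires_grad=True)
    for _ in range(steps):
        perturbed = obs + delta
        action = policy(perturbed)
        cost = cost_critic(torch.cat([perturbed, action], dim=-1)).mean()
        cost.backward()
        with torch.no_grad():
            # Gradient *ascent* on predicted cost, projected back into the eps-ball.
            delta += step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return (obs + delta).detach()


# Usage: perturb a batch of observations before feeding them to the victim policy.
obs = torch.randn(4, obs_dim)
adv_obs = max_cost_attack(obs)
```

The "maximum reward" attack mentioned in the abstract would follow the same projected-gradient pattern but ascend on a reward critic instead, which is what makes it stealthy: the perturbed trajectory keeps reward high while the agent violates safety constraints.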
