
 
Efficiently Improving the Robustness of RL Agents against Strongest Adversaries
Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang

Mon Dec 13 09:55 AM -- 10:00 AM (PST)

It is well known that current deep reinforcement learning (RL) agents are particularly vulnerable to adversarial perturbations. It is therefore important to develop vulnerability-aware algorithms that can improve an RL agent's performance under any attack with a bounded budget. Existing robust training approaches in deep RL either rely on adversarial training with heuristically generated attacks, which may be far from optimal, or learn an RL-based strong adversary, which doubles the computational and sample complexity of training. In this work, we formalize the lower bound of the policy value under bounded attacks via a proposed worst-case Bellman operator. By directly estimating and improving the worst-case value of an agent under attack, we develop a robust training method that efficiently improves the robustness of RL policies without learning an adversary. Empirical evaluations show that our algorithm consistently achieves state-of-the-art performance under strong adversaries, with significantly higher efficiency than other robust training methods.
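The abstract centers on a worst-case Bellman operator whose fixed point lower-bounds the policy value under bounded attacks. The LaTeX sketch below illustrates one plausible form of such a backup; the notation B_epsilon(s) (admissible perturbed observations within budget epsilon) and A_adv(s) (actions the fixed policy pi can be induced to take) is assumed for this illustration and need not match the paper's exact definitions.

% Illustrative worst-case Bellman backup under bounded observation perturbations
% (assumed notation; a sketch of the idea, not the paper's exact operator).
\[
  \mathcal{A}_{\mathrm{adv}}(s) \;=\; \bigl\{\, \pi(\tilde{s}) \;:\; \tilde{s} \in B_{\epsilon}(s) \,\bigr\},
  \qquad
  (\mathcal{T}_{\mathrm{wst}}\,\underline{Q})(s,a)
  \;=\; R(s,a) \;+\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}
  \Bigl[\, \min_{a' \in \mathcal{A}_{\mathrm{adv}}(s')} \underline{Q}(s',a') \,\Bigr].
\]
% The fixed point \underline{Q} estimates the return the agent can secure against
% any observation attack with budget \epsilon; a robust training method can then
% estimate and maximize this quantity directly rather than train an adversary.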

Author Information

Yongyuan Liang (Sun Yat-sen University)
Yanchao Sun (University of Maryland, College Park)
Ruijie Zheng (University of Maryland, College Park)
Furong Huang (University of Maryland)
