Robust Reinforcement Learning via Adversarial training with Langevin Dynamics
Parameswaran Kamalaruban · Yu-Ting Huang · Ya-Ping Hsieh · Paul Rolland · Cheng Shi · Volkan Cevher

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #197

We introduce a sampling perspective to tackle the challenging task of training robust Reinforcement Learning (RL) agents. Leveraging Stochastic Gradient Langevin Dynamics (SGLD), we present a novel, scalable two-player RL algorithm that is a sampling variant of the two-player policy gradient method. Our algorithm consistently outperforms existing baselines in generalization across different training and testing conditions on several MuJoCo environments. Our experiments also show that, even for objective functions that entirely ignore potential environmental shifts, our sampling approach remains highly robust compared to standard RL algorithms.
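For intuition, a minimal sketch of the core idea (Langevin noise injected into a two-player policy-gradient loop) is given below. This is an illustration under stated assumptions, not the paper's exact algorithm: the function names and the estimate_policy_gradients estimator are hypothetical placeholders, and details such as step-size annealing and iterate averaging are omitted.

import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad, step_size):
    # Stochastic Gradient Langevin Dynamics: a gradient step plus Gaussian
    # noise scaled by sqrt(2 * step_size), so the iterates explore a
    # distribution over parameters instead of collapsing to a single point.
    noise = rng.normal(size=theta.shape)
    return theta + step_size * grad + np.sqrt(2.0 * step_size) * noise

def adversarial_sgld_training(theta, phi, estimate_policy_gradients,
                              step_size=1e-3, n_iters=1000):
    # Two-player loop: the protagonist (theta) ascends the expected return J
    # while the adversary (phi) descends it, both with SGLD noise injected.
    # `estimate_policy_gradients` is a hypothetical stand-in for any
    # policy-gradient estimator returning (dJ/dtheta, dJ/dphi).
    for _ in range(n_iters):
        g_theta, g_phi = estimate_policy_gradients(theta, phi)
        theta = sgld_step(theta, g_theta, step_size)   # maximize J
        phi = sgld_step(phi, -g_phi, step_size)        # minimize J
    return theta, phi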

Author Information

Parameswaran Kamalaruban (Alan Turing Institute)
Yu-Ting Huang (EPFL)
Ya-Ping Hsieh (EPFL)
Paul Rolland (EPFL)
Cheng Shi (University of Basel)
Volkan Cevher (EPFL)
