

Policy Gradient for Rectangular Robust Markov Decision Processes

Navdeep Kumar · Esther Derman · Matthieu Geist · Kfir Y. Levy · Shie Mannor

Great Hall & Hall B1+B2 (level 1) #1827
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST


Policy gradient methods have become a standard for training reinforcement learning agents in a scalable and efficient manner. However, they do not account for transition uncertainty, and existing methods for learning robust policies can be computationally expensive. In this paper, we introduce robust policy gradient (RPG), a policy-based method that efficiently solves rectangular robust Markov decision processes (MDPs). We provide a closed-form expression for the worst occupation measure. Incidentally, we find that the worst kernel is a rank-one perturbation of the nominal kernel. Combining the worst occupation measure with a robust Q-value estimation yields an explicit form of the robust gradient. Our resulting RPG can be estimated from data with the same time complexity as its non-robust equivalent. Hence, it relieves the computational burden of the convex optimization problems required for training robust policies by current policy gradient approaches.
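The two objects the abstract names, a discounted occupation measure and a rank-one perturbation of a nominal transition kernel, can be illustrated on a small tabular MDP. The sketch below is not the paper's algorithm: the MDP data are random, and the perturbation direction `k` is a hypothetical placeholder standing in for the paper's closed-form worst kernel. It only shows the mechanics of forming a row-stochastic rank-one update and solving for the occupation measure.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9

# Random nominal MDP (illustrative data, not from the paper).
P = rng.dirichlet(np.ones(S), size=(S, A))   # nominal kernel P[s, a, s']
pi = rng.dirichlet(np.ones(A), size=S)       # stochastic policy pi[s, a]
mu = np.full(S, 1.0 / S)                     # initial state distribution

def occupation_measure(P, pi, mu, gamma):
    """Discounted occupation d^T = (1 - gamma) mu^T (I - gamma P_pi)^{-1}."""
    P_pi = np.einsum("sa,sat->st", pi, P)    # state-to-state kernel under pi
    return (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu)

# Rank-one perturbation P + beta * 1 k^T with a zero-sum direction k, so row
# sums stay 1. The direction here is a stand-in, NOT the paper's worst case.
v = rng.uniform(size=S)                      # hypothetical value estimate
k = v - v.mean()                             # zero-sum perturbation direction
beta = 0.05
P_pert = np.clip(P + beta * k[None, None, :], 0.0, None)
P_pert /= P_pert.sum(axis=-1, keepdims=True) # project back to valid kernels

d_nom = occupation_measure(P, pi, mu, gamma)
d_pert = occupation_measure(P_pert, pi, mu, gamma)
print(np.round(d_nom, 3), np.round(d_pert, 3))
```

Both occupation measures are proper distributions over states (they sum to one), so the perturbation shifts where the policy spends its discounted time without changing the total mass; the clipping step may break exact rank-one-ness when an entry of the nominal kernel is smaller than the perturbation.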
