An Off-policy Policy Gradient Theorem Using Emphatic Weightings
Ehsan Imani · Eric Graves · Martha White

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #167

Policy gradient methods are widely used for control in reinforcement learning, particularly in the continuous action setting. A host of theoretically sound algorithms have been proposed for the on-policy setting, owing to the policy gradient theorem, which provides a simplified form for the gradient. In off-policy learning, however, where the behaviour policy generating the data is not necessarily attempting to learn and follow the optimal policy for the given task, such a theorem has been elusive. In this work, we solve this open problem by providing the first off-policy policy gradient theorem. The key to the derivation is the use of emphatic weightings. We develop a new actor-critic algorithm, called Actor Critic with Emphatic weightings (ACE), that approximates the simplified gradients provided by the theorem. We demonstrate on a simple counterexample that previous off-policy policy gradient methods, in particular OffPAC and DPG, converge to the wrong solution, whereas ACE finds the optimal solution.
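As a rough sketch of the result (paraphrasing the paper's notation and assuming a constant discount \gamma for simplicity; the paper allows state-dependent discounting), the excursion objective weights the target policy's values by the behaviour policy's stationary distribution d_\mu and an interest function i, and the theorem expresses the exact gradient through an emphatic weighting m:

    J_\mu(\theta) = \sum_{s} d_\mu(s)\, i(s)\, v_{\pi_\theta}(s),
    \qquad
    \nabla_\theta J_\mu(\theta) = \sum_{s} m(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, q_{\pi_\theta}(s, a),

    \text{where } m(s') = d_\mu(s')\, i(s') + \gamma \sum_{s, a} m(s)\, \pi_\theta(a \mid s)\, P(s' \mid s, a).

Roughly speaking, replacing m with d_\mu recovers the approximate semi-gradient used by earlier off-policy methods such as OffPAC, which is the gap the counterexample exploits.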

Author Information

Ehsan Imani (University of Alberta)
Eric Graves (University of Alberta)
Martha White (University of Alberta)
