Poster
Independent Policy Gradient Methods for Competitive Reinforcement Learning
Constantinos Daskalakis · Dylan Foster · Noah Golowich

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #96

We obtain global, non-asymptotic convergence guarantees for independent learning algorithms in competitive reinforcement learning settings with two agents (i.e., zero-sum stochastic games). We consider an episodic setting where in each episode, each player independently selects a policy and observes only their own actions and rewards, along with the state. We show that if both players run policy gradient methods in tandem, their policies will converge to a min-max equilibrium of the game, as long as their learning rates follow a two-timescale rule (which is necessary). To the best of our knowledge, this constitutes the first finite-sample convergence result for independent policy gradient methods in competitive RL; prior work has largely focused on centralized, coordinated procedures for equilibrium computation.
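To make the setup concrete, below is a minimal sketch (not the paper's algorithm or experiments) of two independent policy gradient learners with two-timescale step sizes, illustrated in a single-state zero-sum matrix game, which is a special case of the stochastic-game setting described above. The payoff matrix, step-size schedules, and iteration count are illustrative assumptions; each player updates a softmax policy using only its own sampled action and reward.

```python
import numpy as np

# Illustrative sketch: independent REINFORCE-style policy gradient in a
# single-state zero-sum game (a special case of a zero-sum stochastic game).
# Payoff matrix, step sizes, and horizon are assumptions for demonstration.

rng = np.random.default_rng(0)

# Zero-sum payoff: player 1 receives A[a1, a2], player 2 receives -A[a1, a2].
A = np.array([[0.0, 1.0],
              [-1.0, 0.5]])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta1 = np.zeros(2)  # logits of player 1's policy
theta2 = np.zeros(2)  # logits of player 2's policy

T = 50_000
for t in range(1, T + 1):
    pi1, pi2 = softmax(theta1), softmax(theta2)

    # Each player samples its own action; neither observes the other's policy.
    a1 = rng.choice(2, p=pi1)
    a2 = rng.choice(2, p=pi2)
    r1 = A[a1, a2]
    r2 = -r1

    # Score-function gradient estimate from each player's own reward only:
    # grad log softmax(theta)[a] = e_a - pi.
    e1 = np.zeros(2); e1[a1] = 1.0
    e2 = np.zeros(2); e2[a2] = 1.0
    grad1 = r1 * (e1 - pi1)
    grad2 = r2 * (e2 - pi2)

    # Two-timescale rule: the two players decay their step sizes at
    # different rates, so one learner moves on a faster timescale.
    eta1 = 0.5 / t ** 0.75   # slow learner
    eta2 = 0.5 / t ** 0.5    # fast learner
    theta1 += eta1 * grad1
    theta2 += eta2 * grad2

print("player 1 policy:", softmax(theta1))
print("player 2 policy:", softmax(theta2))
```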

Author Information

Constantinos Daskalakis (MIT)
Dylan Foster (MIT)
Noah Golowich (MIT)
