In this paper, we revisit and improve the convergence of policy gradient (PG), natural PG (NPG) methods, and their variance-reduced variants, under general smooth policy parametrizations. More specifically, with the Fisher information matrix of the policy being positive definite: i) we show that a state-of-the-art variance-reduced PG method, which had only been shown to converge to stationary points, converges to the globally optimal value up to some inherent function approximation error due to policy parametrization; ii) we show that NPG enjoys a lower sample complexity; iii) we propose SRVR-NPG, which incorporates variance reduction into the NPG update. Our improvements follow from the observation that the convergence analyses of (variance-reduced) PG and NPG methods can strengthen each other: the stationary convergence analysis of PG applies to NPG as well, and the global convergence analysis of NPG helps establish the global convergence of (variance-reduced) PG methods. Our analysis carefully integrates the advantages of these two lines of work. This improvement also makes variance reduction for NPG possible for the first time, with both global convergence and an efficient finite-sample complexity.
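As background for the methods discussed in the abstract, the following is a minimal sketch of a single natural policy gradient step, which preconditions a sampled policy gradient by the (damped) empirical Fisher information matrix. All names here are illustrative; this is not the paper's SRVR-NPG algorithm, which additionally incorporates variance reduction into the gradient estimate.

```python
import numpy as np

def npg_step(theta, grad_estimate, score_samples, lr=0.1, damping=1e-3):
    """One (hypothetical) natural policy gradient step.

    theta:          current policy parameters, shape (d,)
    grad_estimate:  sampled policy gradient, shape (d,)
    score_samples:  per-sample scores grad log pi(a|s), shape (n, d)
    """
    # Empirical Fisher information: average outer product of score vectors.
    fisher = score_samples.T @ score_samples / score_samples.shape[0]
    # Damping keeps the matrix positive definite, mirroring the paper's
    # assumption on the Fisher information matrix.
    natural_grad = np.linalg.solve(
        fisher + damping * np.eye(theta.size), grad_estimate
    )
    return theta + lr * natural_grad
```

Relative to plain PG, the only change is solving a linear system in the Fisher matrix before the update, which is what makes the step invariant (up to damping) to smooth reparametrizations of the policy.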
Author Information
Yanli Liu (UCLA)
Kaiqing Zhang (University of Illinois at Urbana-Champaign (UIUC))
Tamer Basar (University of Illinois at Urbana-Champaign)
Wotao Yin (Alibaba US, DAMO Academy)
More from the Same Authors

- 2020 Poster: An Improved Analysis of Stochastic Gradient Descent with Momentum
  Yanli Liu · Yuan Gao · Wotao Yin
- 2020 Poster: POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis
  Weichao Mao · Kaiqing Zhang · Qiaomin Xie · Tamer Basar
- 2020 Poster: Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
  Fei Feng · Ruosong Wang · Wotao Yin · Simon Du · Lin Yang
- 2020 Poster: On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
  Kaiqing Zhang · Bin Hu · Tamer Basar
- 2020 Poster: Robust Multi-Agent Reinforcement Learning with Model Uncertainty
  Kaiqing Zhang · Tao Sun · Yunzhe Tao · Sahika Genc · Sunil Mallya · Tamer Basar
- 2020 Poster: Natural Policy Gradient Primal-Dual Method for Constrained Markov Decision Processes
  Dongsheng Ding · Kaiqing Zhang · Tamer Basar · Mihailo Jovanovic
- 2020 Poster: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
  Kaiqing Zhang · Sham Kakade · Tamer Basar · Lin Yang
- 2020 Spotlight: Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
  Kaiqing Zhang · Sham Kakade · Tamer Basar · Lin Yang
- 2020 Spotlight: Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
  Fei Feng · Ruosong Wang · Wotao Yin · Simon Du · Lin Yang
- 2019 Poster: Policy Optimization Provably Converges to Nash Equilibria in Zero-Sum Linear Quadratic Games
  Kaiqing Zhang · Zhuoran Yang · Tamer Basar
- 2019 Poster: Non-Cooperative Inverse Reinforcement Learning
  Xiangyuan Zhang · Kaiqing Zhang · Erik Miehling · Tamer Basar
- 2018 Poster: Breaking the Span Assumption Yields Fast Finite-Sum Minimization
  Robert Hannah · Yanli Liu · Daniel O'Connor · Wotao Yin