Poster
Fast Convergence of Regularized Learning in Games
Vasilis Syrgkanis · Alekh Agarwal · Haipeng Luo · Robert Schapire
Abstract:
We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal-form games. When each player in a game uses an algorithm from our class, each player's individual regret decays at $O(T^{-3/4})$, while the sum of utilities converges to an approximate optimum at $O(T^{-1})$, an improvement upon the worst-case $O(T^{-1/2})$ rates. We give a black-box reduction by which any algorithm in the class achieves $\tilde{O}(T^{-1/2})$ rates against an adversary while maintaining the faster rates against algorithms in the class. Our results extend those of Rakhlin and Sridharan~\cite{Rakhlin2013} and Daskalakis et al.~\cite{Daskalakis2014}, who analyzed only two-player zero-sum games and specific algorithms.
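The abstract does not single out one algorithm, but Optimistic Hedge (multiplicative weights in which the most recent payoff vector is counted an extra time) is a standard example of a regularized learner with recency bias. The Python sketch below, with hypothetical names such as `optimistic_hedge_dynamics` and step size `eta` chosen for illustration, simulates two such players on a small bimatrix game and reports their average individual regrets; it is a minimal illustration under these assumptions, not the paper's implementation or analysis.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def optimistic_hedge_dynamics(A, B, T=2000, eta=0.1):
    """Both players run Optimistic Hedge (a recency-biased regularized learner)
    on the bimatrix game (A, B), where A[i, j] and B[i, j] are the payoffs to
    players 1 and 2 when they play actions i and j.

    Returns the time-averaged individual regret of each player after T rounds.
    """
    n1, n2 = A.shape
    cum_u1, cum_u2 = np.zeros(n1), np.zeros(n2)    # cumulative payoff vectors
    last_u1, last_u2 = np.zeros(n1), np.zeros(n2)  # most recent payoffs (recency term)
    played_u1, played_u2 = 0.0, 0.0                # realized expected utilities

    for _ in range(T):
        # Optimistic step: play against cumulative payoffs plus one extra
        # copy of the most recent payoff vector.
        x = softmax(eta * (cum_u1 + last_u1))
        y = softmax(eta * (cum_u2 + last_u2))

        u1 = A @ y    # expected payoff of each action for player 1
        u2 = B.T @ x  # expected payoff of each action for player 2

        played_u1 += x @ u1
        played_u2 += y @ u2
        cum_u1 += u1
        cum_u2 += u2
        last_u1, last_u2 = u1, u2

    regret1 = (cum_u1.max() - played_u1) / T
    regret2 = (cum_u2.max() - played_u2) / T
    return regret1, regret2

if __name__ == "__main__":
    # Matching pennies: a 2x2 zero-sum game used purely as a toy example.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    B = -A
    for T in (100, 1000, 10000):
        r1, r2 = optimistic_hedge_dynamics(A, B, T=T)
        print(f"T={T:6d}  avg regret: player1={r1:.4f}  player2={r2:.4f}")
```

Under the assumptions above, the printed average regrets shrink as $T$ grows, which is the qualitative behavior the paper quantifies when all players use algorithms from the class.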