Preference Graphs and the Attractors of Regularized Learning in Games
Abstract
We investigate the asymptotic behavior of regularized learning in finite games in relation to the game's preference graph, a combinatorial structure encoding profitable unilateral deviations. While it is known that strict Nash equilibria are precisely the stable and attracting points of regularized learning, it is not known what happens in their absence, or what other stable outcomes may arise in the long run. Motivated by this, we shift our perspective from pointwise solution concepts, like Nash equilibria, to setwise attractors, and we examine how the game's preference graph encodes information about the attractors of regularized learning. Concretely, we show that every attractor of follow-the-regularized-leader necessarily contains a set of pure profiles closed under (weakly) profitable deviations, and that minimal attractors always exist, each of which, in particular, contains a strongly connected set of the preference graph that is closed in this sense. Moreover, we show that any strongly connected set of pure profiles spans a set of mixed profiles which is, in a precise sense, connected with respect to the dynamics. Finally, we establish a strong correspondence between combinatorial and dynamical stability in the special case of subgames, which geometrically correspond to faces of the strategy space.
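To make the central object concrete, the following is an illustrative sketch (not the paper's formal construction) of a preference graph: nodes are pure strategy profiles, with an edge from one profile to another whenever the latter results from a weakly profitable unilateral deviation. The game, the function names, and the closure check are all hypothetical choices for the example; Matching Pennies is used because it has no strict Nash equilibrium, so the only closed set is a strongly connected cycle spanning the whole profile space.

```python
from itertools import product

def preference_graph(strategies, payoff):
    """Build the preference graph of a finite game.

    Nodes are pure strategy profiles; there is an edge a -> b whenever b
    differs from a in exactly one player's strategy and that player's
    payoff weakly improves (the "(weakly) profitable deviation" notion)."""
    nodes = list(product(*strategies))
    edges = {a: [] for a in nodes}
    for a in nodes:
        for i, player_strats in enumerate(strategies):
            for s in player_strats:
                if s == a[i]:
                    continue
                b = a[:i] + (s,) + a[i + 1:]
                if payoff(b)[i] >= payoff(a)[i]:  # weakly profitable for player i
                    edges[a].append(b)
    return nodes, edges

def closed_under_deviations(S, edges):
    """A set of profiles is closed if no weakly profitable deviation leaves it."""
    return all(b in S for a in S for b in edges[a])

# Matching Pennies: player 0 wants to match, player 1 wants to mismatch.
def mp_payoff(profile):
    match = profile[0] == profile[1]
    return (1, -1) if match else (-1, 1)

nodes, edges = preference_graph([("H", "T"), ("H", "T")], mp_payoff)
# Every profile has exactly one improving deviation, forming a 4-cycle:
# no singleton (i.e., no strict Nash equilibrium) is closed, but the full
# strongly connected profile space is.
print(closed_under_deviations(set(nodes), edges))  # True
print(closed_under_deviations({("H", "H")}, edges))  # False
```

In this example the unique minimal closed, strongly connected set is the entire profile space, matching the intuition from the abstract that, absent strict equilibria, the relevant stable objects are sets rather than points.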