## Learning in two-player zero-sum partially observable Markov games with perfect recall

### Tadashi Kozuno · Pierre Ménard · Remi Munos · Michal Valko

Keywords: [ Reinforcement Learning and Planning ] [ Online Learning ] [ Bandits ]

Thu 9 Dec 8:30 a.m. PST — 10 a.m. PST

Abstract: We study the problem of learning a Nash equilibrium (NE) in an extensive game with imperfect information (EGII) through self-play. Precisely, we focus on two-player, zero-sum, episodic, tabular EGII under the *perfect-recall* assumption, where the only feedback is realizations of the game (bandit feedback). In particular, the *dynamics of the EGII are not known*; we can only access them by sampling or by interacting with a game simulator. For this learning setting, we provide the Implicit Exploration Online Mirror Descent (IXOMD) algorithm. It is a model-free algorithm with a high-probability bound on the convergence rate to the NE of order $1/\sqrt{T}$, where $T$ is the number of played games. Moreover, IXOMD is computationally efficient, as it needs to perform updates only along the sampled trajectory.
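
To make the update structure concrete, below is a minimal Python sketch of the two ingredients the abstract names: an implicit-exploration (IX) loss estimate built from bandit feedback, and an online-mirror-descent (exponential-weights) step that touches only the sampled action. It uses the degenerate single-decision-point case against a fixed opponent; the actual IXOMD operates on sequence-form strategies over the whole game tree, and the step sizes `eta` and `gamma` here are illustrative assumptions, not the paper's tuned constants.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3  # toy single-decision-point game (assumption for illustration)
T = 10_000     # number of played games
# Typical Exp3-IX-style schedules; the paper's constants may differ.
gamma = np.sqrt(np.log(n_actions) / (n_actions * T))  # IX smoothing parameter
eta = gamma                                           # OMD learning rate

# Loss of each of our actions against a fixed opponent (illustrative values).
losses = np.array([0.3, 0.5, 0.9])

log_weights = np.zeros(n_actions)  # unnormalized log-policy for entropy-regularized OMD

for t in range(T):
    policy = np.exp(log_weights - log_weights.max())
    policy /= policy.sum()

    a = rng.choice(n_actions, p=policy)  # play one game
    loss = losses[a]                     # bandit feedback: one scalar per episode

    # Implicit exploration (IX) estimator: importance weighting with a gamma
    # offset in the denominator, trading a small bias for bounded variance.
    loss_hat = loss / (policy[a] + gamma)

    # Exponential-weights mirror-descent step. Only the sampled action's
    # weight changes, mirroring IXOMD's updates along the sampled trajectory.
    log_weights[a] -= eta * loss_hat

policy = np.exp(log_weights - log_weights.max())
policy /= policy.sum()
print("final policy:", policy)
```

In this single-state simplification the iterates concentrate on the lowest-loss action; in the full algorithm the same estimator-plus-mirror-descent step is applied at each information set of the player's tree, which is what keeps the per-episode cost proportional to the trajectory length rather than the size of the game.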
