Poster
Learning in two-player zero-sum partially observable Markov games with perfect recall
Tadashi Kozuno · Pierre Ménard · Remi Munos · Michal Valko

Thu Dec 09 08:30 AM -- 10:00 AM (PST)
We study the problem of learning a Nash equilibrium (NE) in an extensive game with imperfect information (EGII) through self-play. Specifically, we focus on two-player, zero-sum, episodic, tabular EGII under the \textit{perfect-recall} assumption, where the only feedback is realizations of the game (bandit feedback). In particular, the \textit{dynamics of the EGII are not known}---we can only access them by sampling or by interacting with a game simulator. For this learning setting, we provide the Implicit Exploration Online Mirror Descent (IXOMD) algorithm. It is a model-free algorithm with a high-probability bound on the convergence rate to the NE of order $1/\sqrt{T}$, where $T$ is the number of played games. Moreover, IXOMD is computationally efficient, as it needs to perform updates only along the sampled trajectory.
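To give a flavor of the two ingredients named in the abstract---implicit-exploration (IX) loss estimates under bandit feedback and online mirror descent (OMD) updates along the sampled trajectory---here is a minimal sketch at a single decision point with K actions. It is not the paper's IXOMD algorithm for tree-structured EGII; the function names (`ixomd_single_node`, `sample_loss`) and the parameter values are illustrative assumptions only.

```python
import numpy as np

def ixomd_single_node(sample_loss, K, T, eta=0.1, gamma=0.05, rng=None):
    """Sketch of the IX + OMD idea at a single decision point.

    sample_loss(a) returns an observed loss in [0, 1] for playing action a
    (bandit feedback). eta is the OMD learning rate and gamma the implicit-
    exploration bias. The full IXOMD algorithm performs analogous updates
    along the sampled trajectory of a game tree; this is only the one-node
    analogue, for illustration.
    """
    rng = rng or np.random.default_rng()
    logits = np.zeros(K)          # unnormalised log-policy (entropy-regularised OMD)
    avg_policy = np.zeros(K)      # time-averaged policy
    for _ in range(T):
        policy = np.exp(logits - logits.max())
        policy /= policy.sum()
        a = rng.choice(K, p=policy)   # sample one action (one "trajectory")
        loss = sample_loss(a)         # bandit feedback: loss of the chosen action only
        # IX estimate: importance weighting with a gamma bias in the denominator,
        # trading a small bias for much lower variance (enables high-probability bounds).
        ix_estimate = loss / (policy[a] + gamma)
        # Exponential-weights (OMD) update touches only the sampled action.
        logits[a] -= eta * ix_estimate
        avg_policy += policy
    return avg_policy / T

# Usage example with a hypothetical noisy loss vector.
if __name__ == "__main__":
    true_loss = np.array([0.8, 0.2, 0.5])
    noisy = lambda a: np.clip(true_loss[a] + 0.1 * np.random.randn(), 0.0, 1.0)
    print(ixomd_single_node(noisy, K=3, T=5000))
```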

Author Information

Tadashi Kozuno (University of Alberta)

Tadashi Kozuno is a postdoc at the University of Alberta. He obtained bachelor's and master's degrees in neuroscience from Osaka University, and a PhD from the Okinawa Institute of Science and Technology. His main interest lies in efficient decision making, from both theoretical and biological perspectives.

Pierre Ménard (Magdeburg University)
Remi Munos (DeepMind)
Michal Valko (DeepMind Paris / Inria / ENS Paris-Saclay)

Michal is a research scientist at DeepMind Paris and in the SequeL team at Inria Lille - Nord Europe, France, led by Philippe Preux and Rémi Munos. He also teaches the course Graphs in Machine Learning at l'ENS Cachan. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the "intelligence" that humans need to put into the system and 2) minimising the time that humans need to spend inspecting, classifying, or "tuning" the algorithms. Another important feature of machine learning algorithms should be the ability to adapt to changing environments. That is why he works in domains that can deal with minimal feedback, such as semi-supervised learning, bandit algorithms, and anomaly detection. The common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. His industrial collaborators include Intel, Technicolor, and Microsoft Research. He received his PhD in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht, and was afterwards a postdoc of Rémi Munos.
