Poster

No-Regret Learning in Unknown Games with Correlated Payoffs

Pier Giuseppe Sessa · Ilija Bogunovic · Maryam Kamgarpour · Andreas Krause

East Exhibition Hall B + C #6

Keywords: [ Game Theory ] [ Algorithms -> Online Learning ] [ Applications -> Game Playing ] [ Probabilistic Methods -> Gaussian Processes ] [ Theory ] [ Algorithms ] [ Bandit Algorithms ]


Abstract:

We consider the problem of learning to play a repeated multi-agent game with an unknown reward function. Single-player online learning algorithms attain strong regret bounds when provided with full-information feedback, which is unfortunately unavailable in many real-world scenarios. Bandit feedback alone, i.e., observing outcomes only for the selected action, yields substantially worse performance. In this paper, we consider a natural model where, besides a noisy measurement of the obtained reward, the player can also observe the opponents' actions. This feedback model, together with a regularity assumption on the reward function, allows us to exploit the correlations among different game outcomes by means of Gaussian processes (GPs). We propose a novel confidence-bound-based bandit algorithm, GP-MW, which maintains a GP model of the reward function and runs a multiplicative weights (MW) method. We obtain novel kernel-dependent regret bounds that are comparable to the known bounds in the full-information setting, while substantially improving upon existing bandit results. We experimentally demonstrate the effectiveness of GP-MW on random matrix games, as well as on the real-world problems of traffic routing and movie recommendation. In our experiments, GP-MW consistently outperforms several baselines, and its performance is often comparable to that of methods with access to full-information feedback.
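To illustrate the mechanism the abstract describes, below is a minimal Python sketch of a GP-MW-style learner, written from the high-level description only: it is not the paper's implementation. It assumes a finite own action set, rewards in [0, 1], real-vector encodings of own and opponent actions, and a fixed RBF kernel with illustrative hyperparameters (eta, beta, noise); all class and function names are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel on joint (own action, opponent action) features.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

class GPMW:
    """Sketch of a GP-MW-style learner for one player with K discrete actions."""

    def __init__(self, own_actions, eta=0.1, beta=2.0, noise=0.1):
        self.A = np.asarray(own_actions, float)   # (K, d_own) action encodings
        self.K = len(self.A)
        self.eta, self.beta, self.noise = eta, beta, noise
        self.weights = np.ones(self.K)            # multiplicative weights
        self.X, self.y = [], []                   # observed (a, a_-i) pairs and rewards

    def sample_action(self, rng):
        p = self.weights / self.weights.sum()
        return rng.choice(self.K, p=p)

    def _ucb(self, opp_action):
        # GP posterior over rewards for every own action paired with the
        # *observed* opponent action -- this is what the extra feedback buys us.
        Xq = np.hstack([self.A, np.tile(opp_action, (self.K, 1))])
        if not self.X:
            return np.ones(self.K)                # optimistic prior (rewards in [0, 1])
        X = np.vstack(self.X)
        Kxx = rbf_kernel(X, X) + self.noise ** 2 * np.eye(len(X))
        Kqx = rbf_kernel(Xq, X)
        L = np.linalg.cholesky(Kxx)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, np.array(self.y)))
        mu = Kqx @ alpha                          # posterior mean at query points
        v = np.linalg.solve(L, Kqx.T)
        var = np.clip(1.0 - (v ** 2).sum(0), 0.0, None)
        return np.minimum(1.0, mu + self.beta * np.sqrt(var))

    def update(self, played, opp_action, reward):
        opp_action = np.atleast_1d(np.asarray(opp_action, float))
        self.X.append(np.concatenate([self.A[played], opp_action]))
        self.y.append(reward)
        # MW step on optimistic losses 1 - UCB for *all* own actions, made
        # possible by observing the opponents' move rather than only the payoff.
        losses = 1.0 - self._ucb(opp_action)
        self.weights *= np.exp(-self.eta * losses)
        self.weights /= self.weights.max()        # rescale for numerical stability
```

A toy usage on a random matrix game (the payoff matrix and one-hot encodings here are purely illustrative):

```python
rng = np.random.default_rng(0)
K = 5
player = GPMW(own_actions=np.eye(K))
payoff = rng.uniform(size=(K, K))                 # unknown reward function
for t in range(50):
    a = player.sample_action(rng)
    b = rng.integers(K)                           # opponent's action, observed after play
    y = payoff[a, b] + 0.05 * rng.normal()        # noisy reward measurement
    player.update(a, np.eye(K)[b], y)
```

The key design point this sketch tries to capture is that observing the opponents' actions lets the learner score every one of its own actions against the realized opponent play via GP confidence bounds, so the MW update uses a full (optimistic) loss vector rather than the single observed payoff of bandit feedback.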
