Poster
Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games
Yu Bai · Chi Jin · Huan Wang · Caiming Xiong

Wed Dec 08 04:30 PM -- 06:00 PM (PST)

Real-world applications such as economics and policy making often involve solving multi-agent games with two unique features: (1) the agents are inherently asymmetric and partitioned into leaders and followers; (2) the agents have different reward functions, so the game is general-sum. The majority of existing results in this field focus on either symmetric solution concepts (e.g. Nash equilibrium) or zero-sum games. It remains open how to learn the Stackelberg equilibrium---an asymmetric analog of the Nash equilibrium---efficiently in general-sum games from noisy samples. This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in the bandit feedback setting, where we only observe noisy samples of the reward. We consider three representative two-player general-sum games: bandit games, bandit-reinforcement learning (bandit-RL) games, and linear bandit games. In all these games, we identify a fundamental gap between the exact value of the Stackelberg equilibrium and its estimated version using finitely many noisy samples, which cannot be closed information-theoretically regardless of the algorithm. We then establish sharp positive results on sample-efficient learning of the Stackelberg equilibrium with value optimal up to the gap identified above, with matching lower bounds in the dependency on the gap, the error tolerance, and the size of the action spaces. Overall, our results unveil unique challenges in learning Stackelberg equilibria under noisy bandit feedback, which we hope will shed light on future research on this topic.
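To make the solution concept concrete, the following is a minimal sketch (not taken from the paper; the payoff matrices, noise model, and sample sizes are illustrative assumptions) of a two-player general-sum bandit game. The leader commits to an action, the follower best-responds, and the exact Stackelberg value is compared against a plug-in estimate built from finitely many noisy reward samples.

```python
import numpy as np

# Illustrative payoff matrices for a 2x2 general-sum game (assumed, not from the paper).
# Rows index the leader's action a; columns index the follower's action b.
R_leader = np.array([[1.0, 0.0],
                     [0.4, 0.6]])
R_follower = np.array([[0.5, 0.45],
                       [0.2, 0.9]])

def stackelberg_value(r_leader, r_follower):
    """Exact Stackelberg value: the leader commits to the action whose
    follower best response yields the highest leader reward."""
    best_responses = r_follower.argmax(axis=1)  # follower's reply to each leader action
    leader_values = r_leader[np.arange(len(best_responses)), best_responses]
    a_star = leader_values.argmax()
    return leader_values[a_star], a_star, best_responses[a_star]

def estimated_stackelberg_value(r_leader, r_follower, n_samples, noise_std=0.1, rng=None):
    """Plug-in estimate under bandit feedback: average n_samples noisy reward
    observations per action pair, then solve the empirical game."""
    rng = np.random.default_rng(rng)
    shape = r_leader.shape + (n_samples,)
    r_leader_hat = (r_leader[..., None] + noise_std * rng.standard_normal(shape)).mean(-1)
    r_follower_hat = (r_follower[..., None] + noise_std * rng.standard_normal(shape)).mean(-1)
    return stackelberg_value(r_leader_hat, r_follower_hat)

exact_value, a, b = stackelberg_value(R_leader, R_follower)
est_value, a_hat, b_hat = estimated_stackelberg_value(R_leader, R_follower, n_samples=100, rng=0)
print(f"exact Stackelberg value {exact_value:.3f} at (a={a}, b={b})")
print(f"estimated value {est_value:.3f} at (a={a_hat}, b={b_hat})")
```

In this toy example the follower is nearly indifferent between its actions against the leader's first action (0.5 vs. 0.45), so a small estimation error can flip the inferred best response and change the leader's value discontinuously; this kind of discontinuity is what the gap discussed in the abstract refers to.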

Author Information

Yu Bai (Salesforce Research)
Chi Jin (University of California, Berkeley)
Huan Wang (Salesforce Research)

Huan Wang is a senior research scientist at Salesforce Research. His research interests include machine learning, big data analytics, computer vision, and NLP. He was previously a research scientist at Microsoft AI Research and Yahoo’s New York Labs, and an adjunct professor at the engineering school of New York University. He received his Ph.D. in Computer Science from Yale University in 2013. Before that, he received an M.Phil. from The Chinese University of Hong Kong and a B.Eng. from Zhejiang University, both in information engineering.

Caiming Xiong (State University of New York at Buffalo)
