Abstract
In two-player zero-sum extensive-form games, Nash equilibrium prescribes optimal strategies against perfectly rational opponents. However, it does not guarantee rational play in parts of the game tree that can only be reached by the players making mistakes. This can be problematic when operationalizing equilibria in the real world among imperfect players. Trembling-hand refinements are a sound remedy to this issue, and are subsets of Nash equilibria that are designed to handle the possibility that any of the players may make mistakes. In this paper, we initiate the study of equilibrium refinements for settings where one of the players is perfectly rational (the "machine") and the other may make mistakes. As we show, this endeavor has many pitfalls: many intuitively appealing approaches to refinement fail in various ways. On the positive side, we introduce a modification of the classical quasi-perfect equilibrium (QPE) refinement, which we call the one-sided quasi-perfect equilibrium. Unlike QPE, one-sided QPE only accounts for mistakes from one player and assumes that no mistakes will be made by the machine. We present experiments on standard benchmark games and an endgame from the famous man-machine match where the AI Libratus was the first to beat top human specialist professionals in heads-up no-limit Texas hold'em poker. We show that one-sided QPE can be computed more efficiently than all known prior refinements, paving the way to wider adoption of Nash equilibrium refinements in settings with perfectly rational machines (or humans perfectly actuating machine-generated strategies) that interact with players prone to mistakes. We also show that one-sided QPE tends to play better than a Nash equilibrium strategy against imperfect opponents.
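The gap between Nash equilibrium and its trembling-hand refinements that the abstract describes can already be seen in a two-action zero-sum game. The sketch below is not taken from the paper; the payoff matrix and the tremble probability eps are hypothetical values chosen only to illustrate the idea.

```python
# A minimal numerical sketch (not from the paper) of the phenomenon described
# above: a Nash equilibrium is allowed to ignore opponent mistakes, while a
# trembling-hand style refinement must exploit them.  The 2x2 payoff matrix and
# the tremble size eps are hypothetical choices made for illustration.
import numpy as np

# Payoffs to the row player (the fallible "human"); the column player (the
# rational "machine") minimizes.  Row T guarantees 1; row B is a mistake that
# the column player can exploit by answering R.
#                 L    R
A = np.array([[1.0, 1.0],   # T
              [1.0, 0.0]])  # B

# In every Nash equilibrium the row player plays T, and against T the column
# player is indifferent: both L and R yield payoff 1, so either is allowed.
row_nash = np.array([1.0, 0.0])                    # pure T
col_L = np.array([1.0, 0.0])
col_R = np.array([0.0, 1.0])
print("vs. exact Nash play:  L ->", row_nash @ A @ col_L,
      " R ->", row_nash @ A @ col_R)               # both 1.0

# A trembling-hand refinement instead evaluates the column player against a
# slightly perturbed row strategy that puts probability eps on the mistake B.
eps = 1e-3
row_trembling = np.array([1.0 - eps, eps])
print("vs. trembling play:   L ->", row_trembling @ A @ col_L,
      " R ->", row_trembling @ A @ col_R)          # 1.0 vs. 1.0 - eps

# Since the column player minimizes, only R remains optimal once trembles are
# taken into account: the refinement exploits the mistake, whereas a plain
# Nash equilibrium may leave that value on the table.
```

The one-sided quasi-perfect equilibrium proposed in the paper applies this kind of robustness asymmetrically: it requires optimality against trembles of the imperfect (human) player only, while assuming the machine itself makes no mistakes.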
Author Information
Gabriele Farina (Carnegie Mellon University)
Tuomas Sandholm (CMU, Strategic Machine, Strategy Robot, Optimized Markets)
More from the Same Authors
- 2021 Spotlight: Subgame solving without common knowledge
  Brian Zhang · Tuomas Sandholm
- 2021 Spotlight: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2022: Clairvoyant Regret Minimization: Equivalence with Nemirovski's Conceptual Prox Method and Extension to General Convex Games
  Gabriele Farina · Christian Kroer · Chung-Wei Lee · Haipeng Luo
- 2022: Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
  Anton Bakhtin · David Wu · Adam Lerer · Jonathan Gray · Athul Jacob · Gabriele Farina · Alexander Miller · Noam Brown
- 2023 Poster: Computing Optimal Equilibria and Mechanisms via Learning in Zero-Sum Extensive-Form Games
  Brian Zhang · Gabriele Farina · Ioannis Anagnostides · Federico Cacciamani · Stephen McAleer · Andreas Haupt · Andrea Celli · Nicola Gatti · Vincent Conitzer · Tuomas Sandholm
- 2023 Poster: Regret Matching$^+$: (In)Stability and Fast Convergence in Games
  Gabriele Farina · Julien Grand-Clément · Christian Kroer · Chung-Wei Lee · Haipeng Luo
- 2023 Poster: On the Convergence and Welfare of Learning Algorithms in Smooth Games
  Ioannis Anagnostides · Tuomas Sandholm
- 2023 Poster: Team-PSRO for Learning Approximate TMECor in Large Team Games via Cooperative Reinforcement Learning
  Stephen McAleer · Gabriele Farina · Gaoyue Zhou · Mingzhi Wang · Yaodong Yang · Tuomas Sandholm
- 2023 Poster: Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games
  Gabriele Farina · Charilaos Pipis
- 2023 Poster: Bicriteria Multidimensional Mechanism Design with Side Information
  Siddharth Prasad · Maria-Florina Balcan · Tuomas Sandholm
- 2023 Poster: On the Convergence of No-Regret Learning Dynamics in Time-Varying Games
  Ioannis Anagnostides · Ioannis Panageas · Gabriele Farina · Tuomas Sandholm
- 2022: ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret
  Stephen McAleer · Gabriele Farina · Marc Lanctot · Tuomas Sandholm
- 2022: Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: Structural Analysis of Branch-and-Cut and the Learnability of Gomory Mixed Integer Cuts
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2022 Poster: Uncoupled Learning Dynamics with $O(\log T)$ Swap Regret in Multiplayer Games
  Ioannis Anagnostides · Gabriele Farina · Christian Kroer · Chung-Wei Lee · Haipeng Luo · Tuomas Sandholm
- 2022 Poster: Maximizing Revenue under Market Shrinkage and Market Uncertainty
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm
- 2022 Poster: Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games
  Brian Zhang · Tuomas Sandholm
- 2022 Poster: Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games
  Ioannis Anagnostides · Gabriele Farina · Ioannis Panageas · Tuomas Sandholm
- 2022 Poster: Subgame Solving in Adversarial Team Games
  Brian Zhang · Luca Carminati · Federico Cacciamani · Gabriele Farina · Pierriccardo Olivieri · Nicola Gatti · Tuomas Sandholm
- 2022 Poster: Near-Optimal No-Regret Learning Dynamics for General Convex Games
  Gabriele Farina · Ioannis Anagnostides · Haipeng Luo · Chung-Wei Lee · Christian Kroer · Tuomas Sandholm
- 2022 Expo Demonstration: Human Modeling and Strategic Reasoning in the Game of Diplomacy
  Noam Brown · Alexander Miller · Gabriele Farina
- 2021 Poster: Subgame solving without common knowledge
  Brian Zhang · Tuomas Sandholm
- 2021 Poster: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
  Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2020 Poster: Small Nash Equilibrium Certificates in Very Large Games
  Brian Zhang · Tuomas Sandholm
- 2020 Poster: Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond
  Gabriele Farina · Tuomas Sandholm
- 2020 Poster: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
  Andrea Celli · Alberto Marchesi · Gabriele Farina · Nicola Gatti
- 2020 Oral: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
  Andrea Celli · Alberto Marchesi · Gabriele Farina · Nicola Gatti
- 2020 Poster: Improving Policy-Constrained Kidney Exchange via Pre-Screening
  Duncan McElfresh · Michael Curry · Tuomas Sandholm · John Dickerson
- 2019 Poster: Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks
  Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Poster: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium
  Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Spotlight: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium
  Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Poster: Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2018: Regret Decomposition in Sequential Games with Convex Action Spaces and Losses
  Gabriele Farina
- 2018 Poster: A Unified Framework for Extensive-Form Game Abstraction with Bounds
  Christian Kroer · Tuomas Sandholm
- 2018 Poster: Depth-Limited Solving for Imperfect-Information Games
  Noam Brown · Tuomas Sandholm · Brandon Amos
- 2018 Poster: Solving Large Sequential Games with the Excessive Gap Technique
  Christian Kroer · Gabriele Farina · Tuomas Sandholm
- 2018 Poster: Practical exact algorithm for trembling-hand equilibrium refinements in games
  Gabriele Farina · Nicola Gatti · Tuomas Sandholm
- 2018 Spotlight: Solving Large Sequential Games with the Excessive Gap Technique
  Christian Kroer · Gabriele Farina · Tuomas Sandholm
- 2018 Poster: Ex ante coordination and collusion in zero-sum multi-player extensive-form games
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2017 Demonstration: Libratus: Beating Top Humans in No-Limit Poker
  Noam Brown · Tuomas Sandholm
- 2017 Poster: Safe and Nested Subgame Solving for Imperfect-Information Games
  Noam Brown · Tuomas Sandholm
- 2017 Oral: Safe and Nested Subgame Solving for Imperfect-Information Games
  Noam Brown · Tuomas Sandholm
- 2016 Poster: Sample Complexity of Automated Mechanism Design
  Maria-Florina Balcan · Tuomas Sandholm · Ellen Vitercik
- 2015 Poster: Regret-Based Pruning in Extensive-Form Games
  Noam Brown · Tuomas Sandholm
- 2015 Demonstration: Claudico: The World's Strongest No-Limit Texas Hold'em Poker AI
  Noam Brown · Tuomas Sandholm
- 2014 Poster: Diverse Randomized Agents Vote to Win
  Albert Jiang · Leandro Soriano Marcolino · Ariel Procaccia · Tuomas Sandholm · Nisarg Shah · Milind Tambe