A fundamental challenge in imperfect-information games is that states do not have well-defined values. As a result, depth-limited search algorithms used in single-agent settings and perfect-information games do not apply. This paper introduces a principled way to conduct depth-limited solving in imperfect-information games by allowing the opponent to choose among a number of strategies for the remainder of the game at the depth limit. Each one of these strategies results in a different set of values for leaf nodes. This forces an agent to be robust to the different strategies an opponent may employ. We demonstrate the effectiveness of this approach by building a master-level heads-up no-limit Texas hold'em poker AI that defeats two prior top agents using only a 4-core CPU and 16 GB of memory. Developing such a powerful agent would have previously required a supercomputer.
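To make the mechanism in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how robust leaf values at the depth limit might be computed. It assumes each candidate opponent continuation strategy has been precomputed and summarized by the payoffs it induces at the depth-limit leaves; the names `robust_leaf_value`, `depth_limited_value`, `leaf_values`, and the node interface are illustrative assumptions.

```python
# Minimal sketch of the idea described above, NOT the authors' implementation:
# at the depth limit, the opponent may pick any of k precomputed continuation
# strategies, so the searching agent scores each frontier node by the worst
# case over those strategies. All names and the node interface are assumptions.
from typing import Callable, Dict, List

# leaf_values[i][leaf_id] = expected payoff to the searching agent if play
# continues from depth-limit leaf `leaf_id` under opponent strategy i.
LeafValues = List[Dict[str, float]]


def robust_leaf_value(leaf_id: str, leaf_values: LeafValues) -> float:
    """Worst-case value of a depth-limit leaf in a zero-sum game: the opponent
    chooses whichever continuation strategy hurts the searching agent most."""
    return min(values[leaf_id] for values in leaf_values)


def depth_limited_value(node, depth: int, leaf_values: LeafValues,
                        agent_policy: Callable) -> float:
    """Toy recursive evaluation of a depth-limited subgame.

    `node` is assumed to expose .leaf_id, .children(), and .is_agent_turn();
    terminal payoffs and information sets are ignored for brevity. A real
    solver would instead run an equilibrium-finding algorithm (e.g., CFR)
    over the depth-limited subgame with these augmented leaf values.
    """
    if depth == 0:
        return robust_leaf_value(node.leaf_id, leaf_values)
    child_vals = [depth_limited_value(c, depth - 1, leaf_values, agent_policy)
                  for c in node.children()]
    if node.is_agent_turn():
        # The searching agent mixes over actions according to its policy.
        probs = agent_policy(node)
        return sum(p * v for p, v in zip(probs, child_vals))
    # Opponent decision point: assume the worst case for the agent.
    return min(child_vals)
```

In the paper's actual approach, the opponent's choice among continuation strategies is made within the equilibrium computation over the depth-limited subgame rather than by a per-leaf minimization as in this toy sketch, but the sketch conveys why the agent must be robust to every continuation the opponent might select.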
Author Information
Noam Brown (Carnegie Mellon University)
Tuomas Sandholm (Carnegie Mellon University)
Brandon Amos (Carnegie Mellon University)
More from the Same Authors
- 2021 Spotlight: Subgame solving without common knowledge » Brian Zhang · Tuomas Sandholm
- 2021 Spotlight: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond » Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2021: Cross-Domain Imitation Learning via Optimal Transport » Arnaud Fickinger · Samuel Cohen · Stuart Russell · Brandon Amos
- 2021: Imitation Learning from Pixel Observations for Continuous Control » Samuel Cohen · Brandon Amos · Marc Deisenroth · Mikael Henaff · Eugene Vinitsky · Denis Yarats
- 2021: Input Convex Gradient Networks » Jack Richter-Powell · Jonathan Lorraine · Brandon Amos
- 2021: Sliced Multi-Marginal Optimal Transport » Samuel Cohen · Alexander Terenin · Yannik Pitcan · Brandon Amos · Marc Deisenroth · Senanayak Sesh Kumar Karri
- 2021: A Fine-Tuning Approach to Belief State Modeling » Samuel Sokota · Hengyuan Hu · David Wu · Jakob Foerster · Noam Brown
- 2021 Workshop: Cooperative AI » Natasha Jaques · Edward Hughes · Jakob Foerster · Noam Brown · Kalesha Bullard · Charlotte Smith
- 2021 Poster: Subgame solving without common knowledge » Brian Zhang · Tuomas Sandholm
- 2021 Poster: Scalable Online Planning via Reinforcement Learning Fine-Tuning » Arnaud Fickinger · Hengyuan Hu · Brandon Amos · Stuart Russell · Noam Brown
- 2021 Poster: Equilibrium Refinement for the Age of Machines: The One-Sided Quasi-Perfect Equilibrium » Gabriele Farina · Tuomas Sandholm
- 2021 Poster: No-Press Diplomacy from Scratch » Anton Bakhtin · David Wu · Adam Lerer · Noam Brown
- 2021 Poster: Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond » Maria-Florina Balcan · Siddharth Prasad · Tuomas Sandholm · Ellen Vitercik
- 2020: Deep Riemannian Manifold Learning » Aaron Lou · Maximilian Nickel · Brandon Amos
- 2020 Poster: Combining Deep Reinforcement Learning and Search for Imperfect-Information Games » Noam Brown · Anton Bakhtin · Adam Lerer · Qucheng Gong
- 2020 Poster: Small Nash Equilibrium Certificates in Very Large Games » Brian Zhang · Tuomas Sandholm
- 2020 Poster: Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond » Gabriele Farina · Tuomas Sandholm
- 2020 Poster: Improving Policy-Constrained Kidney Exchange via Pre-Screening » Duncan McElfresh · Michael Curry · Tuomas Sandholm · John Dickerson
- 2019 Poster: Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks » Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Poster: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium » Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Spotlight: Efficient Regret Minimization Algorithm for Extensive-Form Correlated Equilibrium » Gabriele Farina · Chun Kai Ling · Fei Fang · Tuomas Sandholm
- 2019 Poster: Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions » Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2018 Poster: Differentiable MPC for End-to-end Planning and Control » Brandon Amos · Ivan Jimenez · Jacob I Sacks · Byron Boots · J. Zico Kolter
- 2018 Poster: A Unified Framework for Extensive-Form Game Abstraction with Bounds » Christian Kroer · Tuomas Sandholm
- 2018 Poster: Solving Large Sequential Games with the Excessive Gap Technique » Christian Kroer · Gabriele Farina · Tuomas Sandholm
- 2018 Poster: Practical exact algorithm for trembling-hand equilibrium refinements in games » Gabriele Farina · Nicola Gatti · Tuomas Sandholm
- 2018 Spotlight: Solving Large Sequential Games with the Excessive Gap Technique » Christian Kroer · Gabriele Farina · Tuomas Sandholm
- 2018 Poster: Ex ante coordination and collusion in zero-sum multi-player extensive-form games » Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2017 Demonstration: Libratus: Beating Top Humans in No-Limit Poker » Noam Brown · Tuomas Sandholm
- 2017 Poster: Safe and Nested Subgame Solving for Imperfect-Information Games » Noam Brown · Tuomas Sandholm
- 2017 Oral: Safe and Nested Subgame Solving for Imperfect-Information Games » Noam Brown · Tuomas Sandholm
- 2017 Poster: Task-based End-to-end Model Learning in Stochastic Optimization » Priya Donti · J. Zico Kolter · Brandon Amos
- 2016 Poster: Sample Complexity of Automated Mechanism Design » Maria-Florina Balcan · Tuomas Sandholm · Ellen Vitercik
- 2015 Poster: Regret-Based Pruning in Extensive-Form Games » Noam Brown · Tuomas Sandholm
- 2015 Demonstration: Claudico: The World's Strongest No-Limit Texas Hold'em Poker AI » Noam Brown · Tuomas Sandholm
- 2014 Poster: Diverse Randomized Agents Vote to Win » Albert Jiang · Leandro Soriano Marcolino · Ariel Procaccia · Tuomas Sandholm · Nisarg Shah · Milind Tambe