Humanity faces numerous problems of common-pool resource appropriation. This class of multi-agent social dilemma includes the problems of ensuring sustainable use of fresh water, common fisheries, grazing pastures, and irrigation systems. Abstract models of common-pool resource appropriation based on non-cooperative game theory predict that self-interested agents will generally fail to find socially positive equilibria---a phenomenon called the tragedy of the commons. However, in reality, human societies are sometimes able to discover and implement stable cooperative solutions. Decades of behavioral game theory research have sought to uncover aspects of human behavior that make this possible. Most of that work was based on laboratory experiments where participants only make a single choice: how much to appropriate. Recognizing the importance of spatial and temporal resource dynamics, a recent trend has been toward experiments in more complex real-time video game-like environments. However, standard methods of non-cooperative game theory can no longer be used to generate predictions for this case. Here we show that deep reinforcement learning can be used instead. To that end, we study the emergent behavior of groups of independently learning agents in a partially observed Markov game modeling common-pool resource appropriation. Our experiments highlight the importance of trial-and-error learning in common-pool resource appropriation and shed light on the relationship between exclusion, sustainability, and inequality.
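The methodological core described above is a group of independent, trial-and-error learners interacting in a shared resource game. As a rough illustration only (not the authors' environment or agents, which use deep networks and a partially observed 2D gridworld), the sketch below runs several independent tabular Q-learners in a toy stock-regrowth commons; all names, dynamics, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): N independent learners each
# decide whether to harvest from a shared stock that regrows over time.
import random
from collections import defaultdict

N_AGENTS = 4
MAX_STOCK = 20
REGROWTH = 0.25          # assumed per-unit regrowth probability each step
EPISODE_LEN = 100
ACTIONS = (0, 1)         # 0 = abstain, 1 = harvest one unit

def step(stock, actions):
    """Apply joint harvests, then stochastic regrowth proportional to stock."""
    rewards = []
    for a in actions:
        if a == 1 and stock > 0:
            stock -= 1
            rewards.append(1.0)
        else:
            rewards.append(0.0)
    regrown = sum(random.random() < REGROWTH for _ in range(stock))
    return min(MAX_STOCK, stock + regrown), rewards

# One independent epsilon-greedy Q-learner per agent; each observes only the stock.
Q = [defaultdict(float) for _ in range(N_AGENTS)]
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(500):
    stock = MAX_STOCK
    for t in range(EPISODE_LEN):
        acts = [max(ACTIONS, key=lambda a: Q[i][(stock, a)])
                if random.random() > eps else random.choice(ACTIONS)
                for i in range(N_AGENTS)]
        next_stock, rewards = step(stock, acts)
        for i in range(N_AGENTS):
            best_next = max(Q[i][(next_stock, a)] for a in ACTIONS)
            Q[i][(stock, acts[i])] += alpha * (
                rewards[i] + gamma * best_next - Q[i][(stock, acts[i])])
        stock = next_stock
```

Whether such independently learned policies over-harvest or leave the stock room to regrow is the kind of emergent outcome the paper studies, at much larger scale and with spatial exclusion dynamics.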
Author Information
Julien Pérolat (CNRS)
Joel Leibo (DeepMind)
Vinicius Zambaldi (DeepMind)
Charles Beattie (DeepMind)
Karl Tuyls (University of Liverpool)
Thore Graepel (DeepMind)
More from the Same Authors
- 2021 : Inferring a Continuous Distribution of Atom Coordinates from Cryo-EM Images using VAEs »
  Dan Rosenbaum · Marta Garnelo · Michal Zielinski · Charles Beattie · Ellen Clancy · Andrea Huber · Pushmeet Kohli · Andrew Senior · John Jumper · Carl Doersch · S. M. Ali Eslami · Olaf Ronneberger · Jonas Adler
- 2021 : Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria »
  Kavya Kopparapu · Edgar Dueñez-Guzman · Jayd Matyas · Alexander Vezhnevets · John Agapiou · Kevin McKee · Richard Everett · Janusz Marecki · Joel Leibo · Thore Graepel
- 2023 Competition: Melting Pot Contest »
  Rakshit Trivedi · Akbir Khan · Jesse Clifton · Lewis Hammond · John Agapiou · Edgar Dueñez-Guzman · Jayd Matyas · Dylan Hadfield-Menell · Joel Leibo
- 2020 : Q&A: Open Problems in Cooperative AI with Thore Graepel (DeepMind), Allan Dafoe (University of Oxford), Yoram Bachrach (DeepMind), and Natasha Jaques (Google) [moderator] »
  Thore Graepel · Yoram Bachrach · Allan Dafoe · Natasha Jaques
- 2020 : Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford) »
  Thore Graepel · Allan Dafoe
- 2020 Workshop: Cooperative AI »
  Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach
- 2020 Poster: Learning to Play No-Press Diplomacy with Best Response Policy Iteration »
  Thomas Anthony · Tom Eccles · Andrea Tacchetti · János Kramár · Ian Gemp · Thomas Hudson · Nicolas Porcel · Marc Lanctot · Julien Perolat · Richard Everett · Satinder Singh · Thore Graepel · Yoram Bachrach
- 2020 Spotlight: Learning to Play No-Press Diplomacy with Best Response Policy Iteration »
  Thomas Anthony · Tom Eccles · Andrea Tacchetti · János Kramár · Ian Gemp · Thomas Hudson · Nicolas Porcel · Marc Lanctot · Julien Perolat · Richard Everett · Satinder Singh · Thore Graepel · Yoram Bachrach
- 2019 Poster: Generalization of Reinforcement Learners with Working and Episodic Memory »
  Meire Fortunato · Melissa Tan · Ryan Faulkner · Steven Hansen · Adrià Puigdomènech Badia · Gavin Buttimore · Charles Deck · Joel Leibo · Charles Blundell
- 2019 Poster: Biases for Emergent Communication in Multi-agent Reinforcement Learning »
  Tom Eccles · Yoram Bachrach · Guy Lever · Angeliki Lazaridou · Thore Graepel
- 2019 Poster: Interval timing in deep reinforcement learning agents »
  Ben Deverett · Ryan Faulkner · Meire Fortunato · Gregory Wayne · Joel Leibo
- 2018 Poster: Actor-Critic Policy Optimization in Partially Observable Multiagent Environments »
  Sriram Srinivasan · Marc Lanctot · Vinicius Zambaldi · Julien Perolat · Karl Tuyls · Remi Munos · Michael Bowling
- 2018 Poster: Inequity aversion improves cooperation in intertemporal social dilemmas »
  Edward Hughes · Joel Leibo · Matthew Phillips · Karl Tuyls · Edgar Dueñez-Guzman · Antonio García Castañeda · Iain Dunning · Tina Zhu · Kevin McKee · Raphael Koster · Heather Roff · Thore Graepel
- 2018 Poster: Re-evaluating evaluation »
  David Balduzzi · Karl Tuyls · Julien Perolat · Thore Graepel
- 2017 Poster: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning »
  Marc Lanctot · Vinicius Zambaldi · Audrunas Gruslys · Angeliki Lazaridou · Karl Tuyls · Julien Perolat · David Silver · Thore Graepel
- 2016 : Concluding Remarks »
  Thore Graepel · Frans Oliehoek · Karl Tuyls
- 2016 : A Study of Value Iteration with Non-Stationary Strategies in General Sum Markov Games »
  Julien Pérolat
- 2016 : Introduction »
  Thore Graepel · Karl Tuyls · Frans Oliehoek
- 2016 Workshop: Learning, Inference and Control of Multi-Agent Systems »
  Thore Graepel · Marc Lanctot · Joel Leibo · Guy Lever · Janusz Marecki · Frans Oliehoek · Karl Tuyls · Vicky Holgate
- 2016 Poster: Using Fast Weights to Attend to the Recent Past »
  Jimmy Ba · Geoffrey E Hinton · Volodymyr Mnih · Joel Leibo · Catalin Ionescu
- 2016 Oral: Using Fast Weights to Attend to the Recent Past »
  Jimmy Ba · Geoffrey E Hinton · Volodymyr Mnih · Joel Leibo · Catalin Ionescu