Workshop
Sat Dec 12 05:20 AM -- 12:55 PM (PST)
Cooperative AI
Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach

Workshop Home Page

https://www.CooperativeAI.com/

Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.
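A canonical toy instance of this class of problems is the iterated prisoner's dilemma, where simple reciprocal strategies can sustain cooperation between self-interested agents. The sketch below is purely illustrative (the payoff values and strategy names are standard textbook choices, not drawn from the workshop materials):

```python
# Iterated prisoner's dilemma: a minimal two-agent cooperation problem.
# Row player's payoffs: mutual cooperation -> 3, mutual defection -> 1,
# defecting against a cooperator -> 5, cooperating against a defector -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Unconditionally defect."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees only the opponent's past moves."""
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Two reciprocators sustain mutual cooperation (3 points per round each),
# while tit-for-tat limits how much an unconditional defector can exploit it.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

This captures, in miniature, why the capabilities discussed above matter: an agent that models its counterpart and conditions its behavior on past interaction can reach jointly better outcomes than one that optimizes myopically.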

Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.

We plan to bring together scholars from diverse backgrounds to discuss how AI research can contribute to solving problems of cooperation.


Call for Papers
We invite high-quality paper submissions on the following topics (broadly construed; this list is not exhaustive):

- Multi-agent learning
- Agent cooperation
- Agent communication
- Resolving commitment problems
- Agent societies, organizations and institutions
- Trust and reputation
- Theory of mind and peer modelling
- Markets, mechanism design and economics-based cooperation
- Negotiation and bargaining agents
- Team formation problems

Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing subsequent submission to archival conferences or journals.

Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The review process will be double-blind.

Paper submissions: https://easychair.org/my/conference?conf=coopai2020#

Welcome: Yoram Bachrach (DeepMind) and Gillian Hadfield (University of Toronto) (Opening Talk)
Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford) (Opening Talk)
Invited Speaker: Peter Stone (The University of Texas at Austin) on Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination (Invited Talk)
Invited Speaker: Gillian Hadfield (University of Toronto) on The Normative Infrastructure of Cooperation (Keynote Talk)
Invited Speaker: James Fearon (Stanford University) on Two Kinds of Cooperative AI Challenges: Game Play and Game Design (Keynote Talk)
Invited Speaker: Sarit Kraus (Bar-Ilan University) on Agent-Human Collaboration and Learning for Improving Human Satisfaction (Invited Talk)
Invited Speaker: William Isaac (DeepMind) on Can Cooperation make AI (and Society) Fairer? (Keynote Talk)
Q&A: Open Problems in Cooperative AI with Thore Graepel (DeepMind), Allan Dafoe (University of Oxford), Yoram Bachrach (DeepMind), and Natasha Jaques (Google) [moderator] (Q&A)
Q&A: Gillian Hadfield (University of Toronto): The Normative Infrastructure of Cooperation, with Natasha Jaques (Google) [moderator] (Q&A)
Q&A: William Isaac (DeepMind): Can Cooperation Make AI (and Society) Fairer?, with Natasha Jaques (Google) [moderator] (Q&A)
Q&A: Peter Stone (The University of Texas at Austin): Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination, with Natasha Jaques (Google) [moderator] (Q&A)
Q&A: Sarit Kraus (Bar-Ilan University): Agent-Human Collaboration and Learning for Improving Human Satisfaction, with Natasha Jaques (Google) [moderator] (Q&A)
Q&A: James Fearon (Stanford University): Cooperation Inside and Over the Rules of the Game, with Natasha Jaques (Google) [moderator] (Q&A)
Poster Sessions (hosted in GatherTown) (Poster Sessions)
Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford) (Discussion Panel)
Spotlight Talk: Too many cooks: Bayesian inference for coordinating multi-agent collaboration (Spotlight Talk)
Spotlight Talk: Learning Social Learning (Spotlight Talk)
Spotlight Talk: Benefits of Assistance over Reward Learning (Spotlight Talk)
Spotlight Talk: Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration (Spotlight Talk)
Closing Remarks: Eric Horvitz (Microsoft) (Closing Remarks)