Workshop: Cooperative AI
Thore Graepel, Dario Amodei, Vincent Conitzer, Allan Dafoe, Gillian Hadfield, Eric Horvitz, Sarit Kraus, Kate Larson, Yoram Bachrach
Sat, Dec 12th, 2020 @ 13:20 – 20:55 GMT
Abstract: https://www.CooperativeAI.com/
Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.
We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.
Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.
We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.
Call for Papers
We invite high-quality paper submissions on the following topics (broadly construed; this is not an exhaustive list):
-Multi-agent learning
-Agent cooperation
-Agent communication
-Resolving commitment problems
-Agent societies, organizations and institutions
-Trust and reputation
-Theory of mind and peer modelling
-Markets, mechanism design, and economics-based cooperation
-Negotiation and bargaining agents
-Team formation problems
Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing subsequent submission to archival conferences or journals.
Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The review process will be double-blind.
Paper submissions: https://easychair.org/my/conference?conf=coopai2020#
Chat
To ask questions, please use Rocket.Chat, available only after registration and login.
Schedule
13:20 – 13:30 GMT
Welcome: Yoram Bachrach (DeepMind) and Gillian Hadfield (University of Toronto)
Yoram Bachrach, Gillian Hadfield
13:30 – 14:00 GMT
Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford)
Thore Graepel, Allan Dafoe
14:00 – 14:30 GMT
Invited Speaker: Peter Stone (The University of Texas at Austin) on Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination
Peter Stone
14:30 – 15:00 GMT
Invited Speaker: Gillian Hadfield (University of Toronto) on The Normative Infrastructure of Cooperation
Gillian Hadfield
15:00 – 15:30 GMT
Invited Speaker: James Fearon (Stanford University) on Two Kinds of Cooperative AI Challenges: Game Play and Game Design
James Fearon
15:30 – 16:00 GMT
Invited Speaker: Sarit Kraus (Bar-Ilan University) on Agent-Human Collaboration and Learning for Improving Human Satisfaction
Sarit Kraus
16:00 – 16:30 GMT
Invited Speaker: William Isaac (DeepMind) on Can Cooperation make AI (and Society) Fairer?
William Isaac
16:30 – 16:45 GMT
Q&A: Open Problems in Cooperative AI with Thore Graepel (DeepMind), Allan Dafoe (University of Oxford), Yoram Bachrach (DeepMind), and Natasha Jaques (Google) [moderator]
Thore Graepel, Yoram Bachrach, Allan Dafoe, Natasha Jaques
16:45 – 17:00 GMT
Q&A: Gillian Hadfield (University of Toronto): The Normative Infrastructure of Cooperation, with Natasha Jaques (Google) [moderator]
Gillian Hadfield, Natasha Jaques
17:00 – 17:15 GMT
Q&A: William Isaac (DeepMind): Can Cooperation make AI (and Society) Fairer?, with Natasha Jaques (Google) [moderator]
William Isaac, Natasha Jaques
17:15 – 17:30 GMT
Q&A: Peter Stone (The University of Texas at Austin): Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination, with Natasha Jaques (Google) [moderator]
Peter Stone, Natasha Jaques
17:30 – 17:45 GMT
Q&A: Sarit Kraus (Bar-Ilan University): Agent-Human Collaboration and Learning for Improving Human Satisfaction, with Natasha Jaques (Google) [moderator]
Sarit Kraus, Natasha Jaques
17:45 – 18:00 GMT
Q&A: James Fearon (Stanford University): Two Kinds of Cooperative AI Challenges: Game Play and Game Design, with Natasha Jaques (Google) [moderator]
James Fearon, Natasha Jaques
18:00 – 19:00 GMT
Poster Sessions (hosted in GatherTown)
19:00 – 19:45 GMT
Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford)
Kate Larson, Natasha Jaques, Jeff S Rosenschein, Michael Wooldridge
19:45 – 20:00 GMT
Spotlight Talk: Too many cooks: Bayesian inference for coordinating multi-agent collaboration
Rose Wang
20:00 – 20:15 GMT
Spotlight Talk: Learning Social Learning
Kamal Ndousse
20:15 – 20:30 GMT
Spotlight Talk: Benefits of Assistance over Reward Learning
Rohin Shah
20:30 – 20:45 GMT
Spotlight Talk: Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration
Xavier Puig
20:45 – 20:55 GMT
Closing Remarks: Eric Horvitz (Microsoft)
Eric Horvitz