https://www.CooperativeAI.com/
Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales, ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. As machines powered by artificial intelligence and machine learning play an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.
We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems, which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation and to innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.
Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.
We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.
Call for Papers
We invite high-quality paper submissions on the following topics (broadly construed, this is not an exhaustive list):
-Multi-agent learning
-Agent cooperation
-Agent communication
-Resolving commitment problems
-Agent societies, organizations and institutions
-Trust and reputation
-Theory of mind and peer modelling
-Markets, mechanism design, and economics-based cooperation
-Negotiation and bargaining agents
-Team formation problems
Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing future submission to archival conferences or journals.
Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The review process will be double-blind.
Paper submissions: https://easychair.org/my/conference?conf=coopai2020#
Sat 5:20 a.m. - 5:30 a.m.
Welcome: Yoram Bachrach (DeepMind) and Gillian Hadfield (University of Toronto) (Opening Talk)
Yoram Bachrach · Gillian Hadfield
Sat 5:30 a.m. - 6:00 a.m.
Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford) (Opening Talk)
Thore Graepel · Allan Dafoe
Sat 6:00 a.m. - 6:30 a.m.
Invited Speaker: Peter Stone (The University of Texas at Austin) on Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination (Invited Talk)
Abstract: As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such "ad hoc" team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This talk will cover past and ongoing research on the challenge of building autonomous agents that are capable of robust ad hoc teamwork.
Peter Stone
Sat 6:30 a.m. - 7:00 a.m.
Invited Speaker: Gillian Hadfield (University of Toronto) on The Normative Infrastructure of Cooperation (Keynote Talk)
Abstract: In this talk, I will present the case for the critical role played by third-party-enforced rules in the extensive forms of cooperation we see in humans. Cooperation, I’ll argue, cannot be adequately accounted for—or modeled for AI—within the framework of human preferences, coordination incentives, or bilateral commitments and reciprocity alone. Cooperation is a group phenomenon and requires group infrastructure to maintain. This insight is critical for training AI agents that can cooperate with humans and, likely, other AI agents. Training environments need to be built with normative infrastructure that enables AI agents to learn and participate in cooperative activities—including the cooperative activity that undergirds all others: collective punishment of agents that violate community norms.
Gillian Hadfield
Sat 7:00 a.m. - 7:30 a.m.
Invited Speaker: James Fearon (Stanford University) on Two Kinds of Cooperative AI Challenges: Game Play and Game Design (Keynote Talk)
Abstract: Humans routinely face two types of cooperation problems: how to get to a collectively good outcome given some set of preferences and structural constraints; and how to design, shape, or shove structural constraints and preferences to induce agents to make choices that bring about better collective outcomes. In the terminology of economic theory, the first is a problem of equilibrium selection given a game structure, and the second is a problem of mechanism design by a “social planner.” These two types of problems have been distinguished in, and are central to, a much longer tradition of political philosophy (e.g., state-of-nature arguments). It is fairly clear how AI can and might be constructively applied to the first type of problem, but less clear for the second. How should we think about using AI to contribute to the optimal design of the terms and parameters – the rules of a game – for other agents? Put differently, could there be an AI of constitutional design?
James Fearon
Sat 7:30 a.m. - 8:00 a.m.
Invited Speaker: Sarit Kraus (Bar-Ilan University) on Agent-Human Collaboration and Learning for Improving Human Satisfaction (Invited Talk)
Abstract: We consider environments where a set of human workers needs to handle a large set of tasks while interacting with human users. The arriving tasks vary: they may differ in their urgency, their difficulty, and the knowledge and time required to perform them. Our goal is to decrease the number of workers – which we refer to as operators – handling the tasks, while increasing the users’ satisfaction. We present automated intelligent agents that work together with the human operators to improve the overall performance of such systems and to increase both operators’ and users’ satisfaction. Examples include: home hospitalization environments, where remote specialists instruct and supervise treatments carried out at patients’ homes; operators who tele-operate autonomous vehicles when human intervention is needed; and bankers who provide online service to customers. The automated agents can support the operators: a machine-learning-based agent follows the operator’s work and makes recommendations, helping them interact proficiently with the users. The agents can also learn from the operators and eventually replace them in many of their tasks.
Sarit Kraus
Sat 8:00 a.m. - 8:30 a.m.
Invited Speaker: William Isaac (DeepMind) on Can Cooperation Make AI (and Society) Fairer? (Keynote Talk)
William Isaac
Sat 8:30 a.m. - 8:45 a.m.
Q&A: Open Problems in Cooperative AI with Thore Graepel (DeepMind), Allan Dafoe (University of Oxford), Yoram Bachrach (DeepMind), and Natasha Jaques (Google) [moderator]
Participants can send questions via Sli.do using this link: https://app.sli.do/event/ambolxqi
Thore Graepel · Yoram Bachrach · Allan Dafoe · Natasha Jaques
Sat 8:45 a.m. - 9:00 a.m.
Q&A: Gillian Hadfield (University of Toronto): The Normative Infrastructure of Cooperation, with Natasha Jaques (Google) [moderator]
Participants can send questions via Sli.do using this link: https://app.sli.do/event/02lguhzy
Gillian Hadfield · Natasha Jaques
Sat 9:00 a.m. - 9:15 a.m.
Q&A: William Isaac (DeepMind): Can Cooperation Make AI (and Society) Fairer?, with Natasha Jaques (Google) [moderator]
Participants can send questions via Sli.do using this link: https://app.sli.do/event/riko0stp
William Isaac · Natasha Jaques
Sat 9:15 a.m. - 9:30 a.m.
Q&A: Peter Stone (The University of Texas at Austin): Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination, with Natasha Jaques (Google) [moderator]
Participants can send questions via Sli.do using this link: https://app.sli.do/event/50mlx6cq
Peter Stone · Natasha Jaques
Sat 9:30 a.m. - 9:45 a.m.
Q&A: Sarit Kraus (Bar-Ilan University): Agent-Human Collaboration and Learning for Improving Human Satisfaction, with Natasha Jaques (Google) [moderator]
Participants can send questions via Sli.do using this link: https://app.sli.do/event/9opzmndo
Sarit Kraus · Natasha Jaques
Sat 9:45 a.m. - 10:00 a.m.
Q&A: James Fearon (Stanford University): Cooperation Inside and Over the Rules of the Game, with Natasha Jaques (Google) [moderator]
Participants can send questions via Sli.do using this link: https://app.sli.do/event/uqh9pktn
James Fearon · Natasha Jaques
Sat 10:00 a.m. - 11:00 a.m.
Poster Sessions (hosted in GatherTown)
Gather Town link: [ protected link dropped ] /1l0kNMMpqLZvr9Co/CooperativeAI
Sat 11:00 a.m. - 11:45 a.m.
Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford) (Discussion Panel)
Kate Larson · Natasha Jaques · Jeffrey S Rosenschein · Michael Wooldridge
Sat 11:45 a.m. - 12:00 p.m.
Spotlight Talk: Too many cooks: Bayesian inference for coordinating multi-agent collaboration
Authors: Rose Wang, Sarah Wu, James Evans, Joshua Tenenbaum, David Parkes and Max Kleiman-Weiner
Rose Wang
Sat 12:00 p.m. - 12:15 p.m.
Spotlight Talk: Learning Social Learning
Authors: Kamal Ndousse, Douglas Eck, Sergey Levine and Natasha Jaques
Kamal Ndousse
Sat 12:15 p.m. - 12:30 p.m.
Spotlight Talk: Benefits of Assistance over Reward Learning
Authors: Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael Dennis, Pieter Abbeel, Anca Dragan and Stuart Russell
Rohin Shah
Sat 12:30 p.m. - 12:45 p.m.
Spotlight Talk: Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration
Authors: Xavier Puig, Tianmin Shu, Shuang Li, Zilin Wang, Josh Tenenbaum, Sanja Fidler and Antonio Torralba
Xavier Puig
Sat 12:45 p.m. - 12:55 p.m.
Closing Remarks: Eric Horvitz (Microsoft)
Eric Horvitz
Author Information
Thore Graepel (DeepMind)
Dario Amodei (OpenAI)
Vincent Conitzer (Duke University)
Vincent Conitzer is the Kimberly J. Jenkins University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders? Conitzer has received the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).
Allan Dafoe (University of Oxford)
Gillian Hadfield (University of Toronto, Vector Institute, and OpenAI)
Eric Horvitz (Microsoft Research)
Sarit Kraus (Bar-Ilan University)
Kate Larson (DeepMind, University of Waterloo)
Yoram Bachrach (Google DeepMind)
More from the Same Authors
-
2021 : Normative disagreement as a challenge for Cooperative AI »
Julian Stastny · Maxime Riché · Aleksandr Lyzhov · Johannes Treutlein · Allan Dafoe · Jesse Clifton -
2021 : Bursting Scientific Filter Bubbles: Boosting Innovation via Novel Author Discovery »
Jason Portenoy · Jevin West · Eric Horvitz · Daniel Weld · Tom Hope -
2021 : A Search Engine for Discovery of Scientific Challenges and Directions »
Dan Lahav · Jon Saad-Falcon · Duen Horng Chau · Diyi Yang · Eric Horvitz · Daniel Weld · Tom Hope -
2021 : Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria »
Kavya Kopparapu · Edgar Dueñez-Guzman · Jayd Matyas · Alexander Vezhnevets · John Agapiou · Kevin McKee · Richard Everett · Janusz Marecki · Joel Leibo · Thore Graepel -
2021 : A taxonomy of strategic human interactions in traffic conflicts »
Atrisha Sarkar · Kate Larson · Krzysztof Czarnecki -
2022 : Human-AI Interaction in Selective Prediction Systems »
Elizabeth Bondi-Kelly · Raphael Koster · Hannah Sheahan · Martin Chadwick · Yoram Bachrach · Taylan Cemgil · Ulrich Paquet · Krishnamurthy Dvijotham -
2021 : Closing Remarks »
Gillian Hadfield -
2021 : (Live) Panel Discussion: Cooperative AI »
Kalesha Bullard · Allan Dafoe · Fei Fang · Chris Amato · Elizabeth M. Adams -
2021 : Keynote speakers Q&A »
Sarit Kraus · Drew Fudenberg · Duncan J Watts · Colin Camerer · Johan Ugander · Emma Pierson -
2021 : Modeling Human Decision-Making: Never Ending Learning »
Sarit Kraus -
2021 Poster: Automated Dynamic Mechanism Design »
Hanrui Zhang · Vincent Conitzer -
2020 : Closing Remarks: Eric Horvitz (Microsoft) »
Eric Horvitz -
2020 : Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford) »
Kate Larson · Natasha Jaques · Jeffrey S Rosenschein · Michael Wooldridge -
2020 : Q&A: Sarit Kraus (Bar-Ilan University): Agent-Human Collaboration and Learning for Improving Human Satisfaction, with Natasha Jaques (Google) [moderator] »
Sarit Kraus · Natasha Jaques -
2020 : Q&A: Gillian Hadfield (University of Toronto): The Normative Infrastructure of Cooperation, with Natasha Jaques (Google) [moderator] »
Gillian Hadfield · Natasha Jaques -
2020 : Q&A: Open Problems in Cooperative AI with Thore Graepel (DeepMind), Allan Dafoe (University of Oxford), Yoram Bachrach (DeepMind), and Natasha Jaques (Google) [moderator] »
Thore Graepel · Yoram Bachrach · Allan Dafoe · Natasha Jaques -
2020 : Invited Speaker: Sarit Kraus (Bar-Ilan University) on Agent-Human Collaboration and Learning for Improving Human Satisfaction »
Sarit Kraus -
2020 : Invited Speaker: Gillian Hadfield (University of Toronto) on The Normative Infrastructure of Cooperation »
Gillian Hadfield -
2020 : Open Problems in Cooperative AI: Thore Graepel (DeepMind) and Allan Dafoe (University of Oxford) »
Thore Graepel · Allan Dafoe -
2020 : Welcome: Yoram Bachrach (DeepMind) and Gillian Hadfield (University of Toronto) »
Yoram Bachrach · Gillian Hadfield -
2020 Poster: Learning to Play No-Press Diplomacy with Best Response Policy Iteration »
Thomas Anthony · Tom Eccles · Andrea Tacchetti · János Kramár · Ian Gemp · Thomas Hudson · Nicolas Porcel · Marc Lanctot · Julien Perolat · Richard Everett · Satinder Singh · Thore Graepel · Yoram Bachrach -
2020 Spotlight: Learning to Play No-Press Diplomacy with Best Response Policy Iteration »
Thomas Anthony · Tom Eccles · Andrea Tacchetti · János Kramár · Ian Gemp · Thomas Hudson · Nicolas Porcel · Marc Lanctot · Julien Perolat · Richard Everett · Satinder Singh · Thore Graepel · Yoram Bachrach -
2020 Poster: Learning to summarize with human feedback »
Nisan Stiennon · Long Ouyang · Jeffrey Wu · Daniel Ziegler · Ryan Lowe · Chelsea Voss · Alec Radford · Dario Amodei · Paul Christiano -
2020 Poster: Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments »
Steven Jecmen · Hanrui Zhang · Ryan Liu · Nihar Shah · Vincent Conitzer · Fei Fang -
2020 Poster: Language Models are Few-Shot Learners »
Tom B Brown · Benjamin Mann · Nick Ryder · Melanie Subbiah · Jared Kaplan · Prafulla Dhariwal · Arvind Neelakantan · Pranav Shyam · Girish Sastry · Amanda Askell · Sandhini Agarwal · Ariel Herbert-Voss · Gretchen M Krueger · Tom Henighan · Rewon Child · Aditya Ramesh · Daniel Ziegler · Jeffrey Wu · Clemens Winter · Chris Hesse · Mark Chen · Eric Sigler · Mateusz Litwin · Scott Gray · Benjamin Chess · Jack Clark · Christopher Berner · Sam McCandlish · Alec Radford · Ilya Sutskever · Dario Amodei -
2020 Oral: Language Models are Few-Shot Learners »
Tom B Brown · Benjamin Mann · Nick Ryder · Melanie Subbiah · Jared Kaplan · Prafulla Dhariwal · Arvind Neelakantan · Pranav Shyam · Girish Sastry · Amanda Askell · Sandhini Agarwal · Ariel Herbert-Voss · Gretchen M Krueger · Tom Henighan · Rewon Child · Aditya Ramesh · Daniel Ziegler · Jeffrey Wu · Clemens Winter · Chris Hesse · Mark Chen · Eric Sigler · Mateusz Litwin · Scott Gray · Benjamin Chess · Jack Clark · Christopher Berner · Sam McCandlish · Alec Radford · Ilya Sutskever · Dario Amodei -
2019 Poster: Distinguishing Distributions When Samples Are Strategically Transformed »
Hanrui Zhang · Yu Cheng · Vincent Conitzer -
2019 Poster: Efficient Forward Architecture Search »
Hanzhang Hu · John Langford · Rich Caruana · Saurajit Mukherjee · Eric Horvitz · Debadeepta Dey -
2019 Poster: Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting »
Aditya Grover · Jiaming Song · Ashish Kapoor · Kenneth Tran · Alekh Agarwal · Eric Horvitz · Stefano Ermon -
2019 Poster: Biases for Emergent Communication in Multi-agent Reinforcement Learning »
Tom Eccles · Yoram Bachrach · Guy Lever · Angeliki Lazaridou · Thore Graepel -
2019 Poster: Staying up to Date with Online Content Changes Using Reinforcement Learning for Scheduling »
Andrey Kolobov · Yuval Peres · Cheng Lu · Eric Horvitz -
2018 Poster: Reward learning from human preferences and demonstrations in Atari »
Borja Ibarz · Jan Leike · Tobias Pohlen · Geoffrey Irving · Shane Legg · Dario Amodei -
2018 Poster: Inequity aversion improves cooperation in intertemporal social dilemmas »
Edward Hughes · Joel Leibo · Matthew Phillips · Karl Tuyls · Edgar Dueñez-Guzman · Antonio García Castañeda · Iain Dunning · Tina Zhu · Kevin McKee · Raphael Koster · Heather Roff · Thore Graepel -
2018 Poster: Re-evaluating evaluation »
David Balduzzi · Karl Tuyls · Julien Perolat · Thore Graepel -
2017 : Incomplete Contracting and AI Alignment »
Gillian Hadfield -
2017 : Invited talk 6 »
Dario Amodei -
2017 Poster: A multi-agent reinforcement learning model of common-pool resource appropriation »
Julien Pérolat · Joel Leibo · Vinicius Zambaldi · Charles Beattie · Karl Tuyls · Thore Graepel -
2017 Poster: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning »
Marc Lanctot · Vinicius Zambaldi · Audrunas Gruslys · Angeliki Lazaridou · Karl Tuyls · Julien Perolat · David Silver · Thore Graepel -
2017 Poster: Deep Reinforcement Learning from Human Preferences »
Paul Christiano · Jan Leike · Tom Brown · Miljan Martic · Shane Legg · Dario Amodei -
2017 Poster: Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach »
Emmanouil Platanios · Hoifung Poon · Tom M Mitchell · Eric Horvitz -
2016 : Concluding Remarks »
Thore Graepel · Frans Oliehoek · Karl Tuyls -
2016 : Introduction »
Thore Graepel · Karl Tuyls · Frans Oliehoek -
2016 Workshop: Learning, Inference and Control of Multi-Agent Systems »
Thore Graepel · Marc Lanctot · Joel Leibo · Guy Lever · Janusz Marecki · Frans Oliehoek · Karl Tuyls · Vicky Holgate -
2014 Tutorial: Computing Game-Theoretic Solutions »
Vincent Conitzer -
2012 Poster: Patient Risk Stratification for Hospital-Associated C. Diff as a Time-Series Classification Task »
Jenna Wiens · John Guttag · Eric Horvitz -
2012 Spotlight: Patient Risk Stratification for Hospital-Associated C. Diff as a Time-Series Classification Task »
Jenna Wiens · John Guttag · Eric Horvitz -
2009 Poster: Breaking Boundaries Between Induction Time and Diagnosis Time Active Information Acquisition »
Ashish Kapoor · Eric Horvitz