In this talk, I will present the case for the critical role played by third-party enforced rules in the extensive forms of cooperation we see in humans. Cooperation, I'll argue, cannot be adequately accounted for, or modeled for AI, within the framework of human preferences, coordination incentives, or bilateral commitments and reciprocity alone. Cooperation is a group phenomenon and requires group infrastructure to maintain. This insight is critical for training AI agents that can cooperate with humans and, likely, with other AI agents. Training environments need to be built with normative infrastructure that enables AI agents to learn and participate in cooperative activities, including the cooperative activity that undergirds all others: the collective punishment of agents that violate community norms.
Author Information
Gillian Hadfield (University of Toronto, Vector Institute, and OpenAI)