

Poster in Workshop: Multi-Agent Security: Security as Key to AI Safety

Cooperative AI via Decentralized Commitment Devices

Xyn Sun · Davide Crapis · Matt Stephenson · Jonathan Passerat-Palmbach

Keywords: [ multi-agent security ] [ cooperative AI ] [ credible commitment devices ] [ Multi-Agent Reinforcement Learning (MARL) ] [ Maximal Extractable Value (MEV) ]


Abstract:

Credible commitment devices have been a popular approach for robust multi-agent coordination. However, existing commitment mechanisms face limitations involving privacy, integrity, and susceptibility to strategic behavior by mediators or users. It is unclear whether the cooperative AI techniques we study are robust to real-world incentives and attack vectors. Fortunately, decentralized commitment devices that utilize cryptography have been deployed in the wild, and numerous studies have shown their ability to coordinate algorithmic agents, especially when agents face rational or sometimes adversarial opponents with significant economic incentives, currently on the order of millions to billions of dollars. In this paper, we illustrate potential security issues in cooperative AI via examples from the decentralization literature and, in particular, Maximal Extractable Value (MEV). We call for expanded research into decentralized commitments to advance cooperative AI capabilities for secure coordination in open environments, and for empirical testing frameworks to evaluate multi-agent coordination ability under real-world commitment constraints.
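As an illustration of the kind of cryptographic primitive underlying the decentralized commitment devices the abstract refers to, the following is a minimal commit-reveal sketch (not from the paper; all function names here are hypothetical). An agent binds itself to an action before observing others' moves, then reveals it in a way anyone can verify:

```python
import hashlib
import secrets

def commit(action: str) -> tuple[bytes, bytes]:
    """Commit phase: publish a binding, hiding hash of the chosen action."""
    nonce = secrets.token_bytes(16)  # random salt keeps the action hidden
    digest = hashlib.sha256(nonce + action.encode()).digest()
    return digest, nonce  # digest is published; nonce is kept secret until reveal

def reveal(digest: bytes, nonce: bytes, action: str) -> bool:
    """Reveal phase: any observer can check the commitment."""
    return hashlib.sha256(nonce + action.encode()).digest() == digest

# The agent commits to "cooperate"; later tampering fails verification.
digest, nonce = commit("cooperate")
assert reveal(digest, nonce, "cooperate")
assert not reveal(digest, nonce, "defect")
```

On-chain commitment devices add integrity and availability guarantees on top of this primitive, which is what exposes them to the adversarial, economically motivated environments (e.g., MEV) the paper draws on.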
