

Poster

Universally Expressive Communication in Multi-Agent Reinforcement Learning

Matthew Morris · Thomas D Barrett · Arnu Pretorius

Hall J (level 1) #118

Keywords: [ expressivity ] [ graph neural networks ] [ communication ] [ multi-agent reinforcement learning ]


Abstract:

Allowing agents to share information through communication is crucial for solving complex tasks in multi-agent reinforcement learning. In this work, we consider the question of whether a given communication protocol can express an arbitrary policy. By observing that many existing protocols can be viewed as instances of graph neural networks (GNNs), we demonstrate the equivalence of joint action selection to node labelling. With standard GNN approaches provably limited in their expressive capacity, we draw from existing GNN literature and consider augmenting agent observations with: (1) unique agent IDs and (2) random noise. We provide a theoretical analysis as to how these approaches yield universally expressive communication, and also prove them capable of targeting arbitrary sets of actions for identical agents. Empirically, these augmentations are found to improve performance on tasks where expressive communication is required, whilst, in general, the optimal communication protocol is found to be task-dependent.
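
The sketch below is a minimal, illustrative take (not the authors' code) on the two observation augmentations named in the abstract: appending (1) unique agent IDs and (2) random noise to each agent's observation before a single GNN-style message-passing round produces per-agent action logits, i.e. joint action selection as node labelling. All names and dimensions (AugmentedGNNComms, obs_dim, noise_dim, the sum-aggregation step) are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class AugmentedGNNComms(nn.Module):
    """Hypothetical communication module: GNN message passing over a
    communication graph, with ID and noise augmentation of observations."""

    def __init__(self, obs_dim: int, n_agents: int, noise_dim: int,
                 hidden_dim: int, n_actions: int):
        super().__init__()
        self.noise_dim = noise_dim
        in_dim = obs_dim + n_agents + noise_dim  # obs + one-hot ID + noise
        self.encode = nn.Linear(in_dim, hidden_dim)
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim, n_actions)  # node labelling head

    def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim); adj: (n_agents, n_agents) comms graph
        n = obs.shape[0]
        ids = torch.eye(n, device=obs.device)                # (1) unique agent IDs
        noise = torch.randn(n, self.noise_dim, device=obs.device)  # (2) random noise
        h = torch.relu(self.encode(torch.cat([obs, ids, noise], dim=-1)))
        # one round of sum-aggregated message passing over the comms graph
        msgs = adj @ self.message(h)
        h = torch.relu(self.update(torch.cat([h, msgs], dim=-1)))
        return self.policy(h)  # per-agent action logits


# Usage sketch: identical agents, fully connected communication graph.
model = AugmentedGNNComms(obs_dim=8, n_agents=4, noise_dim=4,
                          hidden_dim=32, n_actions=5)
obs = torch.zeros(4, 8)                  # identical observations for all agents
adj = torch.ones(4, 4) - torch.eye(4)    # fully connected comms graph (no self-loops)
logits = model(obs, adj)                 # (4, 5) action logits
```

Without the ID or noise augmentation, a standard GNN maps identical agents with identical observations and neighbourhoods to identical outputs, so they cannot be assigned different actions; the augmentations break this symmetry, which is the mechanism behind the paper's claim that they allow targeting arbitrary sets of actions for identical agents.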
