Poster in Workshop: Machine Learning for Systems

Learning Collaborative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning

Raffaele Galliera · K. Brent Venable · Matteo Bassani · Niranjan Suri


Abstract:

In modern communication systems, efficient and reliable information dissemination is crucial for supporting critical operations across domains like disaster response, autonomous vehicles, and sensor networks. This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach as a significant step toward more decentralized, efficient, and collaborative solutions. We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination, empowering each agent to decide on message forwarding independently, based on its one-hop neighborhood and the degree of connectivity of each neighbor. This constitutes a paradigm shift from traditional heuristics based on Multi-Point Relay (MPR) selection. Our approach harnesses Graph Convolutional Reinforcement Learning, employing Graph Attention Networks (GAT) with dynamic attention to capture essential network features. We propose two approaches, L-DGN and HL-DGN, which differ in the information exchanged among agents. We evaluate our decentralized approaches by comparing them with a widely used MPR heuristic, and we show that our trained policies efficiently cover the network while bypassing the MPR set selection process. Our approach marks a first step toward supporting the resilience of real-world broadcast communication infrastructures via learned, collaborative information dissemination.
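To make the per-agent forwarding decision concrete, below is a minimal sketch, not the authors' L-DGN/HL-DGN implementation, of a graph-attention Q-network in the spirit described above. It assumes PyTorch Geometric's GATv2Conv (a dynamic-attention GAT variant), illustrative node features (a message-possession flag and node degree), and a hypothetical binary action space {keep, forward}; layer sizes and feature choices are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation): each agent computes
# Q-values for {keep, forward} from attention over its one-hop neighborhood.
import torch
from torch_geometric.nn import GATv2Conv


class ForwardingQNet(torch.nn.Module):
    def __init__(self, in_dim: int = 2, hidden: int = 32, n_actions: int = 2):
        super().__init__()
        # Two rounds of dynamic attention, so an agent's embedding reflects
        # its neighbors and, indirectly, their connectivity.
        self.gat1 = GATv2Conv(in_dim, hidden, heads=2, concat=True)
        self.gat2 = GATv2Conv(2 * hidden, hidden, heads=1, concat=False)
        self.q_head = torch.nn.Linear(hidden, n_actions)

    def forward(self, x, edge_index):
        # x: [num_agents, in_dim] node features, e.g. (has_message, degree)
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.q_head(h)  # per-agent Q-values over {keep, forward}


# Toy usage: a 4-node line network; features are (holds message?, degree).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.tensor([[1.0, 1.0], [0.0, 2.0], [0.0, 2.0], [0.0, 1.0]])
q_values = ForwardingQNet()(x, edge_index)  # shape: [4, 2]
```

Because the Q-values depend only on each agent's local neighborhood, execution remains fully decentralized, in contrast to heuristics that require an explicit MPR set selection step.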
