Agents operating in real-world settings must often adapt to unexpected changes in their environment. Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support the ability of RL agents to deal with the dynamic nature of their environment, which is often amplified by the presence of other agents. In this work, we measure the resilience of a group of agents as the group’s ability to adapt to unexpected perturbations in the environment. To promote resilience, we suggest facilitating collaboration within the group and offer a novel confusion-based communication protocol in which an agent broadcasts the local observations that are least aligned with its previous experience. We present an empirical evaluation of our approach on a set of simulated multi-taxi settings.
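The abstract does not include an implementation, but the core idea of broadcasting the least-aligned observations can be illustrated with a minimal sketch. The `ConfusionBasedBroadcaster` class below is a hypothetical construction, not the authors' code: it scores each local observation by how far it deviates from running statistics of the agent's past observations and selects the most "confusing" ones to broadcast. The running-mean/variance novelty score and the `top_k` parameter are assumptions made for illustration.

```python
import numpy as np


class ConfusionBasedBroadcaster:
    """Hypothetical sketch of confusion-based communication: an agent keeps
    running statistics of its past observations and broadcasts the current
    observations that deviate most from that experience."""

    def __init__(self, obs_dim, top_k=1, decay=0.99):
        self.mean = np.zeros(obs_dim)   # running mean of past observations
        self.var = np.ones(obs_dim)     # running variance of past observations
        self.decay = decay              # exponential forgetting factor
        self.top_k = top_k              # number of observations to broadcast

    def confusion(self, obs):
        # Normalized squared distance of obs from the agent's experience.
        return float(np.sum((obs - self.mean) ** 2 / (self.var + 1e-8)))

    def update(self, obs):
        # Update running statistics with a newly seen observation.
        self.mean = self.decay * self.mean + (1 - self.decay) * obs
        self.var = self.decay * self.var + (1 - self.decay) * (obs - self.mean) ** 2

    def select_broadcast(self, local_obs_batch):
        # Score each local observation, then fold all of them into the
        # running statistics and return the top_k least-aligned ones.
        scores = [self.confusion(obs) for obs in local_obs_batch]
        order = np.argsort(scores)[::-1]
        for obs in local_obs_batch:
            self.update(obs)
        return [local_obs_batch[i] for i in order[: self.top_k]]
```

Under these assumptions, each agent would call `select_broadcast` on its batch of local observations at every communication step and share the returned observations with the rest of the group.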
Author Information
Ofir Abu (Hebrew University of Jerusalem)
Sarah Keren (Technion)
Matthias Gerstgrasser (Harvard University)
Jeffrey S Rosenschein (The Hebrew University of Jerusalem)
More from the Same Authors
- 2021: Deep Reinforcement Learning Explanation via Model Transforms
  Sarah Keren · Yoav Kolumbus · Jeffrey S Rosenschein · David Parkes · Mira Finkelstein
- 2021: Promoting Resilience of Multi-Agent Reinforcement Learning via Confusion-Based Communication
  Ofir Abu · Sarah Keren · Matthias Gerstgrasser · Jeffrey S Rosenschein
- 2021: Promoting Resilience in Multi-Agent Reinforcement Learning via Confusion-Based Communication
  Ofir Abu · Matthias Gerstgrasser · Jeffrey S Rosenschein · Sarah Keren
- 2020: Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford)
  Kate Larson · Natasha Jaques · Jeffrey S Rosenschein · Michael Wooldridge