Agents operating in real-world settings must often adapt to unexpected changes in their environment. Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support the ability of RL agents to deal with the dynamic nature of their environment, which is often heightened by the presence of other agents. In this work, we measure the resilience of a group of agents as the group's ability to adapt to unexpected perturbations in the environment. To promote resilience, we suggest facilitating collaboration within the group, and offer a novel confusion-based communication protocol under which an agent broadcasts the local observations that are least aligned with its previous experience. We present an empirical evaluation of our approach on a set of simulated multi-taxi settings.
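The abstract's core mechanism, broadcasting the local observations least aligned with an agent's previous experience, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the confusion score here (a standardized distance to a running Gaussian estimate of past observations) and the fixed broadcast threshold are assumptions chosen for concreteness; the paper's confusion measure may differ.

```python
import numpy as np

class ConfusionBroadcaster:
    """Hypothetical sketch of confusion-based communication: an agent
    scores each local observation by how poorly it matches its past
    experience and broadcasts only the most surprising ones."""

    def __init__(self, obs_dim, threshold=3.0):
        self.n = 0                       # number of observations seen
        self.mean = np.zeros(obs_dim)    # running mean of past observations
        self.var = np.zeros(obs_dim)     # running (biased) per-dim variance
        self.threshold = threshold       # broadcast if score exceeds this

    def confusion(self, obs):
        # Standardized distance of obs to the running estimate of past
        # experience; higher means less aligned with what was seen before.
        return float(np.sqrt(np.sum((obs - self.mean) ** 2 / (self.var + 1e-8))))

    def update(self, obs):
        # Welford-style exact recursive update of running mean and variance.
        self.n += 1
        delta = obs - self.mean
        self.mean = self.mean + delta / self.n
        delta2 = obs - self.mean
        self.var = self.var + (delta * delta2 - self.var) / self.n

    def step(self, obs):
        """Score the observation, fold it into experience, and return it
        for broadcast if it is confusing enough, else None."""
        score = self.confusion(obs) if self.n > 1 else 0.0
        self.update(obs)
        return obs if score > self.threshold else None
```

In this sketch, an observation that resembles the agent's history yields a low score and is kept local, while an outlier (e.g., the effect of an unexpected environment perturbation) exceeds the threshold and is shared with the group, which is the collaboration pattern the abstract describes.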
Author Information
Ofir Abu (Hebrew University of Jerusalem)
Sarah Keren (Technion)
Matthias Gerstgrasser (Harvard University)
Jeffrey S Rosenschein (The Hebrew University of Jerusalem)
More from the Same Authors
- 2021 : Deep Reinforcement Learning Explanation via Model Transforms
  Sarah Keren · Yoav Kolumbus · Jeffrey S Rosenschein · David Parkes · Mira Finkelstein
- 2021 : Promoting Resilience in Multi-Agent Reinforcement Learning via Confusion-Based Communication
  Ofir Abu · Matthias Gerstgrasser · Jeffrey S Rosenschein · Sarah Keren
- 2021 : Promoting Resilience of Multi-Agent Reinforcement Learning via Confusion-Based Communication
  Ofir Abu · Sarah Keren · Matthias Gerstgrasser · Jeffrey S Rosenschein
- 2022 : Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings
  Guy Azran · Mohamad Hosein Danesh · Stefano Albrecht · Sarah Keren
- 2022 : Meta-RL for Multi-Agent RL: Learning to Adapt to Evolving Agents
  Matthias Gerstgrasser · David Parkes
- 2022 : Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning
  Matthias Gerstgrasser · Tom Danino · Sarah Keren
- 2022 : A (dis-)information theory of revealed and unrevealed preferences
  Nitay Alon · Lion Schulz · Peter Dayan · Jeffrey S Rosenschein
- 2022 Spotlight: Lightning Talks 5A-2
  Qiang LI · Zhiwei Xu · Jiaqi Yang · Thai Hung Le · Haoxuan Qu · Yang Li · Artyom Sorokin · Peirong Zhang · Mira Finkelstein · Nitsan levy · Chung-Yiu Yau · dapeng li · Thommen Karimpanal George · De-Chuan Zhan · Nazar Buzun · Jiajia Jiang · Li Xu · Yichuan Mo · Yujun Cai · Yuliang Liu · Leonid Pugachev · Bin Zhang · Lucy Liu · Hoi-To Wai · Liangliang Shi · Majid Abdolshah · Yoav Kolumbus · Lin Geng Foo · Junchi Yan · Mikhail Burtsev · Lianwen Jin · Yuan Zhan · Dung Nguyen · David Parkes · Yunpeng Baiia · Jun Liu · Kien Do · Guoliang Fan · Jeffrey S Rosenschein · Sunil Gupta · Sarah Keren · Svetha Venkatesh
- 2022 Spotlight: Explainable Reinforcement Learning via Model Transforms
  Mira Finkelstein · Nitsan levy · Lucy Liu · Yoav Kolumbus · David Parkes · Jeffrey S Rosenschein · Sarah Keren
- 2022 Poster: Explainable Reinforcement Learning via Model Transforms
  Mira Finkelstein · Nitsan levy · Lucy Liu · Yoav Kolumbus · David Parkes · Jeffrey S Rosenschein · Sarah Keren
- 2020 : Panel: Kate Larson (DeepMind) [moderator], Natasha Jaques (Google), Jeffrey Rosenschein (The Hebrew University of Jerusalem), Michael Wooldridge (University of Oxford)
  Kate Larson · Natasha Jaques · Jeffrey S Rosenschein · Michael Wooldridge