When an agent interacts with a multi-agent environment, it is challenging to deal with various opponents it has not seen before. Modeling the behaviors, goals, or beliefs of opponents can help the agent adjust its policy to adapt to different opponents. Moreover, it is important to consider opponents that are learning simultaneously or are capable of reasoning. However, existing work usually tackles only one of these types of opponents. In this paper, we propose model-based opponent modeling (MBOM), which employs an environment model to adapt to all kinds of opponents. MBOM simulates the recursive reasoning process in the environment model and imagines a set of improving opponent policies. To represent the opponent policy effectively and accurately, MBOM further mixes the imagined opponent policies according to their similarity to the opponent's real behaviors. Empirically, we show that MBOM adapts more effectively than existing methods in a variety of tasks against different types of opponents, i.e., fixed policies, naive learners, and reasoning learners.
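The mixing step in the abstract can be pictured concretely: each imagined opponent policy is weighted by how well it explains the opponent's observed actions, and the weighted policies are combined into a single estimate. The sketch below is a minimal illustration of that idea, assuming a discrete action space and likelihood-based similarity; the function and variable names are hypothetical, and the paper's exact weighting rule may differ.

```python
import numpy as np

def mix_imagined_policies(imagined_probs, observed_actions):
    """Hypothetical sketch of mixing imagined opponent policies.

    imagined_probs:   (K, T, A) action distributions of K imagined
                      policies over T observed steps.
    observed_actions: (T,) actions the real opponent actually took.
    """
    T = len(observed_actions)
    # Log-likelihood of the opponent's real behavior under each
    # imagined policy (a simple stand-in for "similarity").
    log_lik = np.array([
        np.log(imagined_probs[k, np.arange(T), observed_actions] + 1e-8).sum()
        for k in range(imagined_probs.shape[0])
    ])
    # Softmax over log-likelihoods yields the mixing weights.
    weights = np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    # Mixed opponent policy at the latest step: weighted combination.
    mixed = (weights[:, None] * imagined_probs[:, -1, :]).sum(axis=0)
    return weights, mixed

rng = np.random.default_rng(0)
K, T, A = 3, 5, 4
probs = rng.dirichlet(np.ones(A), size=(K, T))   # K imagined policies, T steps
actions = rng.integers(0, A, size=T)             # opponent's observed actions
w, pi = mix_imagined_policies(probs, actions)
print("mixing weights:", w)
print("mixed policy:", pi)
```

Under this reading, imagined policies that better predict the opponent's recent actions receive larger mixing weights, which captures the intuition of adapting to an unknown or improving opponent.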
Author Information
XiaoPeng Yu (Peking University, Tsinghua University)
Jiechuan Jiang (Peking University)
Wanpeng Zhang (Peking University)
Haobin Jiang (Peking University)
Zongqing Lu (Peking University)
More from the Same Authors
- 2022 Poster: Learning to Share in Networked Multi-Agent Reinforcement Learning »
  Yuxuan Yi · Ge Li · Yaowei Wang · Zongqing Lu
- 2022 Poster: Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination »
  Jiafei Lyu · Xiu Li · Zongqing Lu
- 2022 Poster: I2Q: A Fully Decentralized Q-Learning Algorithm »
  Jiechuan Jiang · Zongqing Lu
- 2022 Poster: Mildly Conservative Q-Learning for Offline Reinforcement Learning »
  Jiafei Lyu · Xiaoteng Ma · Xiu Li · Zongqing Lu
- 2022 Poster: Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning »
  Yuanpei Chen · Tianhao Wu · Shengjie Wang · Xidong Feng · Jiechuan Jiang · Zongqing Lu · Stephen McAleer · Hao Dong · Song-Chun Zhu · Yaodong Yang
- 2022: State Advantage Weighting for Offline RL »
  Jiafei Lyu · aicheng Gong · Le Wan · Zongqing Lu · Xiu Li
- 2022 Spotlight: Mildly Conservative Q-Learning for Offline Reinforcement Learning »
  Jiafei Lyu · Xiaoteng Ma · Xiu Li · Zongqing Lu
- 2022 Spotlight: Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination »
  Jiafei Lyu · Xiu Li · Zongqing Lu
- 2022 Spotlight: Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning »
  Yuanpei Chen · Tianhao Wu · Shengjie Wang · Xidong Feng · Jiechuan Jiang · Zongqing Lu · Stephen McAleer · Hao Dong · Song-Chun Zhu · Yaodong Yang
- 2020 Poster: Learning Individually Inferred Communication for Multi-Agent Cooperation »
  gang Ding · Tiejun Huang · Zongqing Lu
- 2020 Oral: Learning Individually Inferred Communication for Multi-Agent Cooperation »
  gang Ding · Tiejun Huang · Zongqing Lu
- 2019 Poster: Learning Fairness in Multi-Agent Systems »
  Jiechuan Jiang · Zongqing Lu
- 2018 Poster: Learning Attentional Communication for Multi-Agent Cooperation »
  Jiechuan Jiang · Zongqing Lu