Cooperative multi-agent reinforcement learning (MARL) has made significant progress in recent years. For training efficiency and scalability, most MARL algorithms have all agents share the same policy or value network. However, in many complex multi-agent tasks, different agents are expected to possess specific abilities to handle different subtasks. In such scenarios, sharing parameters indiscriminately may lead to similar behavior across all agents, limiting exploration efficiency and degrading the final performance. To balance training complexity and the diversity of agent behavior, we propose a novel framework to learn dynamic subtask assignment (LDSA) in cooperative MARL. Specifically, we first introduce a subtask encoder that constructs a vector representation for each subtask according to its identity. To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy, which dynamically groups agents with similar abilities into the same subtask. In this way, agents dealing with the same subtask share their learning of specific abilities, and different subtasks correspond to different specific abilities. We further introduce two regularizers: one increases the representation difference between subtasks, and the other stabilizes training by discouraging agents from frequently changing subtasks. Empirical results show that LDSA learns reasonable and effective subtask assignments for better collaboration and significantly improves learning performance on the challenging StarCraft II micromanagement benchmark and Google Research Football.
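The core idea of ability-based subtask selection can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's implementation: in LDSA the subtask representations and agent ability vectors come from learned networks, whereas here they are random stand-ins, and assignment is shown as a simple softmax over dot-product similarities between ability and subtask vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_subtasks, dim = 4, 2, 8

# Hypothetical stand-ins for learned components:
# a subtask encoder would map each subtask identity to a representation,
# and agent ability vectors would be derived from agents' trajectories.
subtask_repr = rng.normal(size=(n_subtasks, dim))
agent_ability = rng.normal(size=(n_agents, dim))

def assign_subtasks(abilities, subtasks):
    """Ability-based selection: each agent favors the subtask whose
    representation best matches its ability vector."""
    logits = abilities @ subtasks.T                      # (n_agents, n_subtasks)
    # Numerically stable softmax over subtasks for each agent.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs

assignment, probs = assign_subtasks(agent_ability, subtask_repr)
print(assignment)  # one subtask index per agent
```

Agents whose ability vectors are close end up with similar selection distributions and thus tend to be grouped under the same subtask, which is the grouping behavior the selection strategy is meant to achieve.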
Author Information
Mingyu Yang (University of Science and Technology of China)
Jian Zhao (University of Science and Technology of China)
Xunhan Hu
Wengang Zhou (University of Science and Technology of China (USTC))
Jiangcheng Zhu
Houqiang Li (University of Science and Technology of China)
More from the Same Authors
- 2022: Multi-Agent Reinforcement Learning with Shared Resources for Inventory Management »
  Yuandong Ding · Mingxiao Feng · Guozi Liu · Wei Jiang · Chuheng Zhang · Li Zhao · Lei Song · Houqiang Li · Yan Jin · Jiang Bian
- 2022 Spotlight: Lightning Talks 3A-3 »
  Xu Yan · Zheng Dong · Qiancheng Fu · Jing Tan · Hezhen Hu · Fukun Yin · Weilun Wang · Ke Xu · Heshen Zhan · Wen Liu · Qingshan Xu · Xiaotong Zhao · Chaoda Zheng · Ziheng Duan · Zilong Huang · Xintian Shi · Wengang Zhou · Yew Soon Ong · Pei Cheng · Hujun Bao · Houqiang Li · Wenbing Tao · Jiantao Gao · Bin Kang · Weiwei Xu · Limin Wang · Ruimao Zhang · Tao Chen · Gang Yu · Rynson Lau · Shuguang Cui · Zhen Li
- 2022 Spotlight: Hand-Object Interaction Image Generation »
  Hezhen Hu · Weilun Wang · Wengang Zhou · Houqiang Li
- 2022 Poster: Hand-Object Interaction Image Generation »
  Hezhen Hu · Weilun Wang · Wengang Zhou · Houqiang Li
- 2021 Poster: Dual Progressive Prototype Network for Generalized Zero-Shot Learning »
  Chaoqun Wang · Shaobo Min · Xuejin Chen · Xiaoyan Sun · Houqiang Li
- 2021 Poster: Contextual Similarity Aggregation with Self-attention for Visual Re-ranking »
  Jianbo Ouyang · Hui Wu · Min Wang · Wengang Zhou · Houqiang Li
- 2021 Poster: Probing Inter-modality: Visual Parsing with Self-Attention for Vision-and-Language Pre-training »
  Hongwei Xue · Yupan Huang · Bei Liu · Houwen Peng · Jianlong Fu · Houqiang Li · Jiebo Luo
- 2020 Poster: Promoting Stochasticity for Expressive Policies via a Simple and Efficient Regularization Method »
  Qi Zhou · Yufei Kuang · Zherui Qiu · Houqiang Li · Jie Wang