Current value-based multi-agent reinforcement learning methods optimize individual Q values to guide agents' behaviours via centralized training with decentralized execution (CTDE). However, such expected (i.e., risk-neutral) Q values are insufficient even under CTDE: the randomness of rewards and the uncertainty of the environment cause these methods to fail to train coordinated agents in complex environments. To address these issues, we propose RMIX, a novel cooperative MARL method that applies the Conditional Value at Risk (CVaR) measure to the learned distributions of individual Q values. Specifically, we first learn each agent's return distribution, from which CVaR is calculated analytically for decentralized execution. Then, to handle the temporal nature of stochastic outcomes during execution, we propose a dynamic risk level predictor that tunes the risk level over time. Finally, we optimize the CVaR policies during centralized training: the CVaR values are used to estimate the target in the TD error, and they also serve as auxiliary local rewards for updating the local return distributions via a quantile regression loss. Empirically, our method outperforms state-of-the-art methods on multi-agent risk-sensitive navigation scenarios and challenging StarCraft II cooperative tasks, demonstrating enhanced coordination and improved sample efficiency.
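The paper's released implementation is not reproduced here, but the two distributional ingredients named in the abstract admit a compact sketch. The PyTorch snippet below shows (i) the analytic CVaR of a return distribution represented by N equally weighted quantiles, as in QR-DQN-style critics, and (ii) the quantile regression (Huber) loss commonly used to fit such quantiles. Function names, tensor shapes, and the fixed risk level `alpha` are illustrative assumptions; in RMIX the risk level would instead come from the dynamic risk level predictor.

```python
import math

import torch


def cvar_from_quantiles(quantiles: torch.Tensor, alpha: float) -> torch.Tensor:
    """Analytic CVaR_alpha of a return distribution given by N quantiles.

    quantiles: (batch, n_actions, N) quantile estimates of the return
               distribution (shape is an assumption for this sketch).
    alpha:     risk level in (0, 1]; alpha = 1 recovers the risk-neutral mean.

    CVaR_alpha(Z) = (1 / alpha) * integral_0^alpha F_Z^{-1}(u) du,
    approximated by averaging the lowest ceil(alpha * N) quantiles.
    """
    n = quantiles.shape[-1]
    k = max(1, math.ceil(alpha * n))             # number of tail quantiles kept
    sorted_q, _ = torch.sort(quantiles, dim=-1)  # enforce monotone quantiles
    return sorted_q[..., :k].mean(dim=-1)        # (batch, n_actions)


def quantile_huber_loss(pred: torch.Tensor, target: torch.Tensor,
                        kappa: float = 1.0) -> torch.Tensor:
    """Quantile regression loss with a Huber penalty (as in QR-DQN).

    pred:   (batch, N) predicted quantiles at midpoints tau_i = (i + 0.5) / N.
    target: (batch, M) target quantiles (e.g., detached TD targets).
    """
    n = pred.shape[-1]
    taus = (torch.arange(n, dtype=pred.dtype, device=pred.device) + 0.5) / n
    u = target.unsqueeze(1) - pred.unsqueeze(2)          # (batch, N, M)
    huber = torch.where(u.abs() <= kappa,
                        0.5 * u.pow(2),
                        kappa * (u.abs() - 0.5 * kappa))
    weight = (taus.view(1, -1, 1) - (u.detach() < 0).float()).abs()
    return (weight * huber / kappa).mean(dim=2).sum(dim=1).mean()
```

Under these assumptions, each agent would act greedily with respect to its CVaR values during decentralized execution, while during centralized training the CVaR values would enter the TD target as described above.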
Author Information
Wei Qiu (Nanyang Technological University)
Xinrun Wang (Nanyang Technological University)
Runsheng Yu (Xiaomi Intelligent Technology Co., Ltd)
Rundong Wang (Nanyang Technological University)
Xu He (Nanyang Technological University)
Bo An (Nanyang Technological University)
Svetlana Obraztsova (Nanyang Technological University)
Zinovi Rabinovich (Nanyang Technological University)
More from the Same Authors
- 2022 Poster: Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses
  Yuzhou Cao · Tianchi Cai · Lei Feng · Lihong Gu · Jinjie Gu · Bo An · Gang Niu · Masashi Sugiyama
- 2022: Policy Resilience to Environment Poisoning Attack on Reinforcement Learning
  Hang Xu · Zinovi Rabinovich
- 2022 Spotlight: Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems
  Yanchen Deng · Shufeng Kong · Caihua Liu · Bo An
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Poster: Alleviating "Posterior Collapse" in Deep Topic Models via Policy Gradient
  Yewen Li · Chaojie Wang · Zhibin Duan · Dongsheng Wang · Bo Chen · Bo An · Mingyuan Zhou
- 2022 Poster: Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems
  Yanchen Deng · Shufeng Kong · Caihua Liu · Bo An
- 2022 Poster: Out-of-Distribution Detection with An Adaptive Likelihood Ratio on Informative Hierarchical VAE
  Yewen Li · Chaojie Wang · Xiaobo Xia · Tongliang Liu · Xin Miao · Bo An
- 2021 Poster: Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
  Hongxin Wei · Lue Tao · Renchunzi Xie · Bo An
- 2020: Efficient Reservoir Management through Deep Reinforcement Learning
  Xinrun Wang
- 2019 Poster: Manipulating a Learning Defender and Ways to Counteract
  Jiarui Gan · Qingyu Guo · Long Tran-Thanh · Bo An · Michael Wooldridge
- 2018 Poster: DeepExposure: Learning to Expose Photos with Asynchronously Reinforced Adversarial Learning
  Runsheng Yu · Wenyu Liu · Yasen Zhang · Zhi Qu · Deli Zhao · Bo Zhang