Poster
Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
Tabish Rashid · Gregory Farquhar · Bei Peng · Shimon Whiteson
QMIX is a popular $Q$-learning algorithm for cooperative MARL in the centralised training and decentralised execution paradigm.
In order to enable easy decentralisation, QMIX restricts the joint action $Q$-values it can represent to be a monotonic mixing of each agent's utilities.
However, this restriction prevents it from representing value functions in which an agent's ordering over its actions can depend on other agents' actions.
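The following minimal sketch illustrates this limitation with an assumed two-agent matrix game (the payoff values are illustrative and not taken from the paper): agent 1's best action flips depending on agent 2's action, an ordering that no monotonic mixing of per-agent utilities can represent.

```python
# Illustrative sketch (assumed payoffs): a 3x3 cooperative matrix game.
# Rows index agent 1's actions, columns index agent 2's actions.
import numpy as np

Q_joint = np.array([[  8., -12., -12.],
                    [-12.,   0.,   0.],
                    [-12.,   0.,   0.]])

# Agent 1's ordering over its own actions depends on agent 2's choice:
print(Q_joint[:, 0].argmax())  # best response when agent 2 plays 0 -> action 0
print(Q_joint[:, 1].argmax())  # best response when agent 2 plays 1 -> action 1
```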
To analyse this representational limitation, we first formalise the objective QMIX optimises, which allows us to view QMIX as an operator
that first computes the $Q$-learning targets and then projects them into the space representable by QMIX.
This projection returns a representable $Q$-value that minimises the unweighted squared error across all joint actions.
In particular, we show that this projection can fail to recover the optimal policy even with access to $Q^*$; this failure stems primarily from the equal weighting placed on each joint action.
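A rough numerical sketch of this failure mode, under the simplifying assumption that the monotonic family is stood in for by an additive decomposition $Q_{tot}(u^1, u^2) = q_1(u^1) + q_2(u^2)$ (a simple member of the monotonic class), using the illustrative payoff matrix above: the unweighted least-squares projection places its greedy joint action away from the true optimum.

```python
# Sketch (assumption): project Q* onto an additive decomposition
# q1(u1) + q2(u2) by minimising the UNWEIGHTED squared error over all
# joint actions, then inspect the greedy joint action of the projection.
import numpy as np

Q_star = np.array([[  8., -12., -12.],
                   [-12.,   0.,   0.],
                   [-12.,   0.,   0.]])
n = Q_star.shape[0]

# Design matrix: one indicator column per per-agent utility q1(a), q2(b).
rows, cols = np.indices((n, n))
X = np.concatenate([np.eye(n)[rows.ravel()], np.eye(n)[cols.ravel()]], axis=1)
theta, *_ = np.linalg.lstsq(X, Q_star.ravel(), rcond=None)
Q_proj = (X @ theta).reshape(n, n)

print(np.unravel_index(Q_proj.argmax(), Q_proj.shape))  # (1, 1): the wrong joint action
print(np.unravel_index(Q_star.argmax(), Q_star.shape))  # (0, 0): the true optimum
```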
We rectify this by introducing a weighting into the projection, in order to place more importance on the better joint actions.
We propose two weighting schemes and prove that they recover the correct maximal action for any joint action $Q$-values, and therefore for $Q^*$ as well.
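Continuing the same sketch: re-weighting the squared error so that the best joint action dominates (weight 1 on the argmax of $Q^*$ and a small $\alpha$ on every other joint action, a simplified stand-in for the paper's weighting schemes rather than their exact definitions) makes the projection's greedy joint action agree with the true optimum.

```python
# Sketch (assumption): the same additive projection, but with a weighted
# squared error placing weight 1 on the best joint action and a small
# alpha on all others (a simplified stand-in for the paper's weightings).
import numpy as np

Q_star = np.array([[  8., -12., -12.],
                   [-12.,   0.,   0.],
                   [-12.,   0.,   0.]])
n, alpha = Q_star.shape[0], 0.01

w = np.full(n * n, alpha)
w[Q_star.ravel().argmax()] = 1.0          # up-weight the optimal joint action

rows, cols = np.indices((n, n))
X = np.concatenate([np.eye(n)[rows.ravel()], np.eye(n)[cols.ravel()]], axis=1)
sw = np.sqrt(w)                           # weighted least squares via rescaling
theta, *_ = np.linalg.lstsq(X * sw[:, None], Q_star.ravel() * sw, rcond=None)
Q_proj = (X @ theta).reshape(n, n)

print(np.unravel_index(Q_proj.argmax(), Q_proj.shape))  # (0, 0): the optimal joint action
```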
Based on our analysis and results in the tabular setting, we introduce two scalable versions of our algorithm, Centrally-Weighted (CW) QMIX and Optimistically-Weighted (OW) QMIX, and demonstrate improved performance on both predator-prey and challenging multi-agent StarCraft benchmark tasks (Samvelyan et al., 2019).
Author Information
Tabish Rashid (University of Oxford)
Gregory Farquhar (University of Oxford)
Bei Peng (University of Oxford)
Shimon Whiteson (University of Oxford)
More from the Same Authors
- 2020 Poster: Can Q-Learning with Graph Networks Learn a Generalizable Branching Heuristic for a SAT Solver?
  Vitaly Kurin · Saad Godil · Shimon Whiteson · Bryan Catanzaro
- 2020 Poster: Learning Retrospective Knowledge with Reverse Reinforcement Learning
  Shangtong Zhang · Vivek Veeriah · Shimon Whiteson
- 2019 Poster: MAVEN: Multi-Agent Variational Exploration
  Anuj Mahajan · Tabish Rashid · Mikayel Samvelyan · Shimon Whiteson
- 2019 Poster: Loaded DiCE: Trading off Bias and Variance in Any-Order Score Function Gradient Estimators for Reinforcement Learning
  Gregory Farquhar · Shimon Whiteson · Jakob Foerster
- 2019 Poster: Multi-Agent Common Knowledge Reinforcement Learning
  Christian Schroeder de Witt · Jakob Foerster · Gregory Farquhar · Philip Torr · Wendelin Boehmer · Shimon Whiteson
- 2019 Poster: DAC: The Double Actor-Critic Architecture for Learning Options
  Shangtong Zhang · Shimon Whiteson
- 2019 Poster: Fast Efficient Hyperparameter Tuning for Policy Gradient Methods
  Supratik Paul · Vitaly Kurin · Shimon Whiteson
- 2019 Poster: VIREL: A Variational Inference Framework for Reinforcement Learning
  Matthew Fellows · Anuj Mahajan · Tim G. J. Rudner · Shimon Whiteson
- 2019 Spotlight: VIREL: A Variational Inference Framework for Reinforcement Learning
  Matthew Fellows · Anuj Mahajan · Tim G. J. Rudner · Shimon Whiteson
- 2019 Poster: Generalized Off-Policy Actor-Critic
  Shangtong Zhang · Wendelin Boehmer · Shimon Whiteson
- 2017 Poster: Dynamic-Depth Context Tree Weighting
  Joao V Messias · Shimon Whiteson
- 2016 Poster: Learning to Communicate with Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Ioannis Assael · Nando de Freitas · Shimon Whiteson
- 2015 Poster: Copeland Dueling Bandits
  Masrour Zoghi · Zohar Karnin · Shimon Whiteson · Maarten de Rijke