

Poster in the Agent Learning in Open-Endedness Workshop

JaxMARL: Multi-Agent RL Environments in JAX

Alexander Rutherford · Benjamin Ellis · Matteo Gallici · Jonathan Cook · Andrei Lupu · Garðar Ingvarsson Juto · Timon Willi · Akbir Khan · Christian Schroeder de Witt · Alexandra Souly · Saptarashmi Bandyopadhyay · Mikayel Samvelyan · Minqi Jiang · Robert Lange · Shimon Whiteson · Bruno Lacerda · Nick Hawes · Tim Rocktäschel · Chris Lu · Jakob Foerster

Keywords: [ Multi-Agent Reinforcement Learning ] [ Hardware Acceleration ] [ JAX ]


Abstract:

Benchmarks play an important role in the development of machine learning algorithms. Reinforcement learning environments are traditionally run on the CPU, limiting their scalability with typical academic compute. Recent advancements in JAX, however, have enabled the wider use of hardware acceleration to overcome these computational hurdles by producing massively parallel RL training pipelines and environments. This is particularly useful for multi-agent reinforcement learning (MARL) research, where not only must multiple agents be considered at each environment step, adding computational burden, but sample complexity is also increased by non-stationarity, decentralised partial observability, and other MARL challenges. In this paper, we present JaxMARL, the first open-source code base that combines ease of use with GPU-enabled efficiency and supports a large number of commonly used MARL environments as well as popular baseline algorithms. Our experiments show that our JAX-based implementations are up to 1400x faster than existing single-threaded baselines. This enables efficient and thorough evaluations, with the potential to alleviate the field's evaluation crisis. We also introduce and benchmark SMAX, a vectorised, simplified version of the StarCraft Multi-Agent Challenge, which removes the need to run the StarCraft II game engine. This not only enables GPU acceleration but also provides a more flexible MARL environment, unlocking the potential for self-play, meta-learning, and other future applications in MARL.
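To make the hardware-accelerated rollout pattern concrete, the sketch below steps a JaxMARL environment with jax.lax.scan and batches whole rollouts across many random seeds with jax.vmap, so that thousands of environments run in parallel on a single accelerator. Environment construction follows the gymnax-style example in the JaxMARL README (jaxmarl.make, dict-of-agents actions); the environment name, rollout length, batch size, and random placeholder policy are illustrative assumptions rather than code from the paper.

```python
import jax
from jaxmarl import make  # JaxMARL's environment registry

# Environment name follows the README example; any registered name works.
env = make("MPE_simple_world_comm_v3")

def rollout_step(carry, key):
    """One synchronous step for all agents (pure, so it traces under scan)."""
    obs, state = carry
    key_act, key_step = jax.random.split(key)
    act_keys = jax.random.split(key_act, env.num_agents)
    # Placeholder random policy: actions are a dict keyed by agent name.
    actions = {
        agent: env.action_space(agent).sample(act_keys[i])
        for i, agent in enumerate(env.agents)
    }
    obs, state, rewards, dones, infos = env.step(key_step, state, actions)
    return (obs, state), rewards

NUM_STEPS = 128  # illustrative rollout length

def rollout(key):
    key_reset, key_scan = jax.random.split(key)
    obs, state = env.reset(key_reset)
    step_keys = jax.random.split(key_scan, NUM_STEPS)
    _, rewards = jax.lax.scan(rollout_step, (obs, state), step_keys)
    return rewards  # dict: agent name -> (NUM_STEPS,) per-step rewards

# jit(vmap(...)) compiles one batched program that runs 1024 environments
# in parallel on the accelerator, the source of the reported speedups.
batched_rollout = jax.jit(jax.vmap(rollout))
keys = jax.random.split(jax.random.PRNGKey(0), 1024)
batched_rewards = batched_rollout(keys)  # agent -> (1024, NUM_STEPS)
```

Because the entire rollout is a single jitted function, scaling the experiment is just a change to the number of keys, which is what makes throughput comparisons against single-threaded CPU baselines straightforward.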
