Poster

Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning

Joseph Suarez · David Bloomin · Kyoung Whan Choe · Hao Xiang Li · Ryan Sullivan · Nishaanth Kanna · Daniel Scott · Rose Shuman · Herbie Bradley · Louis Castricato · Phillip Isola · Chenghui Yu · Yuhao Jiang · Qimai Li · Jiaxin Chen · Xiaolong Zhu

Great Hall & Hall B1+B2 (level 1) #2014

Abstract:

Neural MMO 2.0 is a massively multi-agent and multi-task environment for reinforcement learning research. This version features a novel task system that broadens the range of training settings and poses a new generalization challenge: evaluation on and against tasks, maps, and opponents never seen during training. Maps are procedurally generated, with 128 agents in the standard setting and 1–1024 supported overall. Version 2.0 is a complete rewrite of its predecessor with a threefold performance improvement, effectively addressing simulation bottlenecks in online training. Improved compatibility enables training with standard reinforcement learning frameworks designed for much simpler environments. Neural MMO 2.0 is free and open source, with comprehensive documentation available at neuralmmo.github.io and an active community Discord. To spark initial research on this new platform, we are concurrently running a competition at NeurIPS 2023.
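The compatibility claim above refers to the dict-keyed parallel multi-agent interface (in the style of PettingZoo's ParallelEnv) that standard RL frameworks consume. The sketch below illustrates that interaction pattern with a hypothetical stub environment; `StubEnv`, its methods, and the no-op policy are illustrative stand-ins, not the actual `nmmo` API, and the agent count simply mirrors the paper's standard setting of 128.

```python
import random

class StubEnv:
    """Hypothetical stand-in for a parallel multi-agent environment.

    Observations, rewards, dones, and infos are all dicts keyed by
    agent id, the convention standard multi-agent RL frameworks expect.
    """
    def __init__(self, num_agents=128, episode_len=8):
        self.agents = list(range(1, num_agents + 1))
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return {i: {"tick": 0} for i in self.agents}

    def step(self, actions):
        self.t += 1
        done = self.t >= self.episode_len
        obs = {i: {"tick": self.t} for i in self.agents}
        rewards = {i: random.random() for i in self.agents}
        dones = {i: done for i in self.agents}
        infos = {i: {} for i in self.agents}
        return obs, rewards, dones, infos

# Generic rollout loop: one action per live agent each tick.
env = StubEnv(num_agents=128)
obs = env.reset()
returns = {i: 0.0 for i in env.agents}
while True:
    actions = {i: 0 for i in obs}  # placeholder policy: a single no-op action
    obs, rewards, dones, infos = env.step(actions)
    for i, r in rewards.items():
        returns[i] += r
    if all(dones.values()):
        break
```

After the loop, `returns` holds one episode return per agent; swapping `StubEnv` for the real environment leaves the loop structure unchanged, which is what lets single-agent-oriented frameworks drive many agents at once.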
