Competition
Lux AI Season 3: Multi-Agent Meta Learning at Scale
Stone Tao · Akarsh Kumar · Bovard Doerschuk-Tiberi · Isabelle Pan · Addison Howard · Hao Su
West Meeting Room 209
The proposed competition revolves around testing the limits of agents (e.g. rule-based or meta-RL agents) when it comes to adapting to a game with changing dynamics. We propose a unique 1v1 competition format in which two teams face off in a sequence of 5 games. The game mechanics and partial observability are designed so that optimal gameplay requires agents to efficiently explore and discover the game dynamics: the strongest agents may play "suboptimally" in game 1 to explore, then win easily in games 2 through 5 by leveraging the information gained in game 1 and adapting. The competition provides a GPU-parallelized game environment via JAX to enable fast training and evaluation on a single GPU, lowering the barrier of entry to research at scales typically reserved for industry. Participants can submit their agents to compete against other submissions on an online leaderboard hosted by Kaggle, ranked by a TrueSkill rating system. Thanks to the large number of competitors the Lux AI Challenges typically attract, the results of the competition will provide a dataset of top open-sourced rule-based agents as well as many game episodes, enabling unique analyses (e.g. quantifying emergence or surprise) that past competitions usually cannot support.
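The GPU parallelization described above can be illustrated with a minimal sketch: a toy single-game `step` function (a hypothetical stand-in for the much richer Lux S3 dynamics, not the competition's actual API) is vectorized across many simultaneous games with `jax.vmap` and compiled with `jax.jit`, so thousands of games advance in one accelerator call.

```python
import jax
import jax.numpy as jnp

def step(state, action):
    # Toy dynamics: a hypothetical stand-in for the real game step,
    # used only to show the vectorization pattern.
    new_state = state + action
    reward = -jnp.abs(new_state)
    return new_state, reward

# Vectorize the single-game step across a batch of parallel games,
# then JIT-compile the batched function for the accelerator.
batched_step = jax.jit(jax.vmap(step))

num_games = 1024
states = jnp.zeros(num_games)   # 1024 games stepped in parallel
actions = jnp.ones(num_games)
states, rewards = batched_step(states, actions)
print(states.shape)  # (1024,)
```

Because `vmap` transforms the per-game function rather than requiring hand-written batched code, the same pattern scales from a handful of evaluation matches to large self-play training batches on a single GPU.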