

Talk in Competition: Competition Track Day 3: Overviews + Breakout Sessions

BASALT: A MineRL Competition on Solving Human-Judged Tasks + Q&A

Rohin Shah · Cody Wild · Steven Wang · Neel Alex · Brandon Houghton · William Guss · Sharada Mohanty · Stephanie Milani · Nicholay Topin · Pieter Abbeel · Stuart Russell · Anca Dragan


Abstract:

The Benchmark for Agents that Solve Almost-Lifelike Tasks (BASALT) competition aims to promote research on learning from human feedback, in order to enable agents that can pursue tasks that lack crisp, easily defined reward functions. We provide tasks consisting of a simple English-language description alongside a Gym environment, without any associated reward function, but with expert demonstrations. Participants will train agents for these tasks using their preferred methods. We expect typical solutions to use imitation learning or learning from comparisons. Submitted agents will be evaluated on how well they complete the tasks, as judged by humans given the same task descriptions.
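To make the setup concrete, the sketch below shows what interacting with a BASALT task environment might look like. It assumes the `minerl` package is installed and uses the task id "MineRLBasaltFindCave-v0" as an illustrative example; both are assumptions rather than part of the abstract, and the random-action loop is only a stand-in for a participant's trained policy.

```python
# Minimal sketch of rolling out a policy in a BASALT task environment.
# Assumes the `minerl` package and the illustrative task id below are available.
import gym
import minerl  # noqa: F401  (importing registers the MineRL environments with Gym)

env = gym.make("MineRLBasaltFindCave-v0")
obs = env.reset()

done = False
while not done:
    action = env.action_space.sample()  # stand-in for an agent trained from demonstrations
    obs, reward, done, info = env.step(action)
    # BASALT tasks provide no reward function, so `reward` carries no training signal;
    # task completion is judged by human evaluators after the episode.

env.close()
```

Because the environment returns no meaningful reward, the loop above is only useful for generating rollouts; the learning signal comes from the provided expert demonstrations or from human comparisons collected by the participants.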