

Poster

The MAGICAL Benchmark for Robust Imitation

Sam Toyer · Rohin Shah · Andrew Critch · Stuart Russell

Poster Session 1 #558



Abstract:

Imitation Learning (IL) algorithms are typically evaluated in the same environment that was used to create demonstrations. This rewards precise reproduction of demonstrations in one particular environment, but provides little information about how robustly an algorithm can generalise the demonstrator's intent to substantially different deployment settings. This paper presents the MAGICAL benchmark suite, which permits systematic evaluation of generalisation by quantifying robustness to different kinds of distribution shift that an IL algorithm is likely to encounter in practice. Using the MAGICAL suite, we confirm that existing IL algorithms overfit significantly to the context in which demonstrations are provided. We also show that standard methods for reducing overfitting are effective at creating narrow perceptual invariances, but are not sufficient to enable transfer to contexts that require substantially different behaviour, which suggests that new approaches will be needed in order to robustly generalise demonstrator intent. Code and data for the MAGICAL suite are available at https://github.com/qxcv/magical/
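The abstract describes evaluating a single IL policy both in the context where demonstrations were given and under systematically shifted test contexts. As a rough illustration of that evaluation loop, here is a minimal sketch following the Gym registration pattern used in the linked repository. The environment IDs (`FindDupe-Demo-v0`, `FindDupe-TestAll-v0`), the `magical.register_envs()` call, and the random stand-in policy are assumptions drawn from the repository's README rather than details stated on this page, and should be checked against the released code.

```python
# Hedged sketch: compare one policy's return on a MAGICAL task's
# demonstration ("Demo") variant and a shifted test variant.
# Assumes the classic Gym API (reset() -> obs, step() -> 4-tuple),
# which matches the era of the MAGICAL release.
import gym
import magical

magical.register_envs()  # assumed: adds MAGICAL tasks to the Gym registry


def average_return(env_id, policy, episodes=10):
    """Roll out `policy` for a few episodes and return the mean return."""
    env = gym.make(env_id)
    total = 0.0
    for _ in range(episodes):
        obs, done, ep_return = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs, env))
            ep_return += reward
        total += ep_return
    env.close()
    return total / episodes


def random_policy(obs, env):
    # Stand-in for a trained IL policy: samples actions uniformly.
    return env.action_space.sample()


# Same task, two contexts: the demonstration variant and a test variant
# that perturbs the environment (layout, colour, dynamics, etc.).
demo_score = average_return("FindDupe-Demo-v0", random_policy)
test_score = average_return("FindDupe-TestAll-v0", random_policy)
print(f"demo variant: {demo_score:.2f}   test variant: {test_score:.2f}")
```

The gap between the two scores is the kind of quantity the benchmark is built to expose: a policy that merely reproduces demonstrations scores well on the demo variant but degrades on the shifted test variants.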
