

Poster

Fighting Copycat Agents in Behavioral Cloning from Observation Histories

Chuan Wen · Jierui Lin · Trevor Darrell · Dinesh Jayaraman · Yang Gao

Poster Session 0 #40

Keywords: [ Reinforcement Learning and Planning ] [ Exploration ]


Abstract:

Imitation learning trains policies to map from input observations to the actions that an expert would choose. In this setting, distribution shift frequently exacerbates the effect of misattributing expert actions to nuisance correlates among the observed variables. We observe that a common instance of this causal confusion occurs in partially observed settings when expert actions are strongly correlated over time: the imitator learns to cheat by predicting the expert's previous action, rather than the next action. To combat this "copycat problem", we propose an adversarial approach to learn a feature representation that removes excess information about the previous expert action nuisance correlate, while retaining the information necessary to predict the next action. In our experiments, our approach improves performance significantly across a variety of partially observed imitation learning tasks.
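To make the adversarial idea concrete, below is a minimal sketch of how such a feature representation could be trained: an encoder over the observation history feeds a policy head that predicts the next action, while an adversarial head behind a gradient-reversal layer tries to recover the previous action, pushing the encoder to discard that nuisance information. This is an illustrative PyTorch example under assumed module names, dimensions, and loss weights; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class CopycatRobustPolicy(nn.Module):
    """Encoder over an observation history, a policy head for the next action a_t,
    and an adversarial head that tries to recover the previous action a_{t-1}.
    The gradient-reversal layer pushes the encoder to remove previous-action information."""
    def __init__(self, obs_dim, history_len, act_dim, feat_dim=128, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim * history_len, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        self.policy_head = nn.Linear(feat_dim, act_dim)     # predicts the next action
        self.adversary_head = nn.Linear(feat_dim, act_dim)  # tries to predict the previous action

    def forward(self, obs_history):
        z = self.encoder(obs_history.flatten(1))
        next_action_pred = self.policy_head(z)
        prev_action_pred = self.adversary_head(grad_reverse(z, self.lam))
        return next_action_pred, prev_action_pred

# Hypothetical training step on a batch of (observation history, next action, previous action).
def training_step(model, optimizer, obs_hist, next_act, prev_act, adv_weight=0.5):
    next_pred, prev_pred = model(obs_hist)
    bc_loss = nn.functional.mse_loss(next_pred, next_act)   # behavioral cloning loss
    adv_loss = nn.functional.mse_loss(prev_pred, prev_act)  # adversary recovers a_{t-1}
    # Minimizing this total trains the adversary head normally, while the reversed
    # gradient drives the encoder to strip out previous-action information.
    loss = bc_loss + adv_weight * adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A single backward pass then realizes the min-max game: the adversary head improves at predicting the previous action, while the encoder is updated in the opposite direction, retaining only the features needed to predict the expert's next action.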
