
Poster

Automatic Data Augmentation for Generalization in Reinforcement Learning

Roberta Raileanu · Maxwell Goldstein · Denis Yarats · Ilya Kostrikov · Rob Fergus

Keywords: [ Machine Learning ] [ Reinforcement Learning and Planning ]


Abstract:

Deep reinforcement learning (RL) agents often fail to generalize beyond their training environments. To alleviate this problem, recent work has proposed the use of data augmentation. However, different tasks tend to benefit from different types of augmentations and selecting the right one typically requires expert knowledge. In this paper, we introduce three approaches for automatically finding an effective augmentation for any RL task. These are combined with two novel regularization terms for the policy and value function, required to make the use of data augmentation theoretically sound for actor-critic algorithms. Our method achieves a new state-of-the-art on the Procgen benchmark and outperforms popular RL algorithms on DeepMind Control tasks with distractors. In addition, our agent learns policies and representations which are more robust to changes in the environment that are irrelevant for solving the task, such as the background.
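
To make the regularization idea concrete, below is a minimal sketch (not the authors' released code) of how augmentation-consistency terms for an actor-critic agent might look in PyTorch. The names `policy`, `value_fn`, `augment`, and the weight `alpha` are hypothetical stand-ins; the exact loss form, weights, and augmentation choices follow the paper's own setup.

```python
# Minimal sketch of augmentation-consistency regularizers for an actor-critic
# agent. All function names and the weight `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F


def augmentation_regularizers(policy, value_fn, obs, augment, alpha=0.1):
    """Penalize the policy and value function for changing their outputs
    when the observation is transformed by a task-irrelevant augmentation."""
    aug_obs = augment(obs)  # e.g. random crop, color jitter, random convolution

    # Policy regularizer: KL between the action distribution on the original
    # observation (treated as a fixed target) and on the augmented observation.
    with torch.no_grad():
        logits = policy(obs)
    aug_logits = policy(aug_obs)
    g_pi = F.kl_div(
        F.log_softmax(aug_logits, dim=-1),
        F.softmax(logits, dim=-1),
        reduction="batchmean",
    )

    # Value regularizer: mean-squared difference between value estimates
    # on the original and augmented observations.
    with torch.no_grad():
        v = value_fn(obs)
    g_v = F.mse_loss(value_fn(aug_obs), v)

    # Added on top of the usual actor-critic objective with weight alpha.
    return alpha * (g_pi + g_v)
```

In this sketch the gradients flow only through the outputs on the augmented observations, so the augmentation acts as a consistency constraint rather than as extra policy-gradient data, which is the sense in which the regularizers keep the actor-critic update sound.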
