

Poster

Generalization of Reinforcement Learners with Working and Episodic Memory

Meire Fortunato · Melissa Tan · Ryan Faulkner · Steven Hansen · Adrià Puigdomènech Badia · Gavin Buttimore · Charles Deck · Joel Leibo · Charles Blundell

East Exhibition Hall B + C #192

Keywords: [ Data, Challenges, Implementations, and Software -> Virtual Environments; Deep Learning ] [ Memory-Augmented Neural Networks; Neu ] [ Deep Learning ]


Abstract:

Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others and how well they generalize. The field also has yet to see a prevalent, consistent, and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization. To that end, we first construct a diverse set of memory tasks that allow us to evaluate test-time generalization across multiple dimensions. Second, we develop and perform multiple ablations on an agent architecture that combines multiple memory systems, compare it with baseline models, and investigate its performance on the task suite.
