Poster
Generalization of Reinforcement Learners with Working and Episodic Memory
Meire Fortunato · Melissa Tan · Ryan Faulkner · Steven Hansen · Adrià Puigdomènech Badia · Gavin Buttimore · Charles Deck · Joel Leibo · Charles Blundell

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #192

Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others and how well they generalize. The field also has yet to see a prevalent, consistent, and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization. To that end, we first construct a diverse set of memory tasks that allow us to evaluate test-time generalization across multiple dimensions. Second, we develop and perform multiple ablations on an agent architecture that combines multiple memory systems, compare it against baseline models, and investigate its performance on the task suite.
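The evaluation protocol the abstract describes, scoring an agent separately on training-set tasks and on holdout variants that differ along chosen dimensions, can be sketched as below. This is a minimal illustrative sketch, not the paper's actual code; `TaskVariant`, `run_episode`, and `evaluate` are hypothetical names introduced here.

```python
# Hypothetical sketch of holdout-based evaluation: an agent is scored on both
# train-split and holdout-split task variants, so a generalization gap between
# the two splits can be measured. All names are illustrative, not from the paper.
from dataclasses import dataclass
from statistics import mean


@dataclass
class TaskVariant:
    name: str
    split: str  # "train" or "holdout"


def run_episode(agent, task):
    # Placeholder for running one episode and returning the agent's score.
    return agent(task)


def evaluate(agent, variants, episodes=5):
    """Average episode score per split, so train vs. holdout can be compared."""
    scores = {"train": [], "holdout": []}
    for task in variants:
        for _ in range(episodes):
            scores[task.split].append(run_episode(agent, task))
    return {split: mean(vals) for split, vals in scores.items() if vals}


# Toy agent that performs worse on holdout variants, simulating a
# memory-specific generalization gap.
toy_agent = lambda task: 1.0 if task.split == "train" else 0.6
suite = [TaskVariant("maze-small", "train"), TaskVariant("maze-large", "holdout")]
result = evaluate(toy_agent, suite)
```

The per-split averages in `result` make the train-to-holdout gap explicit, which is the kind of comparison the methodology calls for.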

Author Information

Meire Fortunato (DeepMind)
Melissa Tan (DeepMind)
Ryan Faulkner (DeepMind)
Steven Hansen (DeepMind)
Adrià Puigdomènech Badia (Google DeepMind)
Gavin Buttimore (DeepMind)
Charlie Deck (DeepMind)
Joel Leibo (DeepMind)
Charles Blundell (DeepMind)
