
Intra-agent speech permits zero-shot task acquisition
Chen Yan · Federico Carnevale · Petko I Georgiev · Adam Santoro · Aurelia Guy · Alistair Muldal · Chia-Chun Hung · Joshua Abramson · Timothy Lillicrap · Gregory Wayne

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #521

Human language learners are exposed to a trickle of informative, context-sensitive language, but a flood of raw sensory data. Through both social language use and internal processes of rehearsal and practice, language learners are able to build high-level, semantic representations that explain their perceptions. Here, we take inspiration from such processes of "inner speech" in humans (Vygotsky, 1934) to better understand the role of intra-agent speech in embodied behavior. First, we formally pose intra-agent speech as a semi-supervised problem and develop two algorithms that enable visually grounded captioning with little labeled language data. We then experimentally compute scaling curves over different amounts of labeled data and compare the data efficiency against a supervised learning baseline. Finally, we incorporate intra-agent speech into an embodied, mobile manipulator agent operating in a 3D virtual world, and show that with as few as 150 additional image captions, intra-agent speech endows the agent with the ability to manipulate and answer questions about a new object without any related task-directed experience (zero-shot). Taken together, our experiments suggest that modelling intra-agent speech is effective in enabling embodied agents to learn new tasks efficiently and without direct interaction experience.
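The paper's two algorithms are not detailed in the abstract, but the general flavour of learning captions from little labeled data can be illustrated with a standard self-training (pseudo-labelling) loop. The sketch below is purely illustrative and is not the authors' method: a toy 1-nearest-neighbour "captioner" is fit on a handful of labelled image features, then unlabelled features close enough to a labelled example adopt its caption and are folded back into the training set. All names and the distance threshold are hypothetical.

```python
# Hedged sketch of semi-supervised captioning via pseudo-labelling.
# NOT the paper's algorithm; a generic self-training illustration.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_caption(feature, labelled):
    """Return (caption, distance) of the closest labelled example."""
    return min(
        ((cap, distance(feature, feat)) for feat, cap in labelled),
        key=lambda pair: pair[1],
    )

def pseudo_label(labelled, unlabelled, threshold):
    """One self-training round: adopt captions for confident matches only."""
    augmented = list(labelled)
    for feat in unlabelled:
        cap, dist = nearest_caption(feat, labelled)
        if dist <= threshold:  # crude confidence proxy; ambiguous points are skipped
            augmented.append((feat, cap))
    return augmented

# Tiny worked example with 2-D stand-ins for image features.
labelled = [((0.0, 0.0), "red cube"), ((10.0, 10.0), "blue ball")]
unlabelled = [(0.5, 0.5), (9.5, 9.8), (5.0, 5.0)]
augmented = pseudo_label(labelled, unlabelled, threshold=2.0)
print(len(augmented))  # → 4: the two near points are adopted, the ambiguous one is not
```

The confidence threshold is what lets a small labelled set (here two captions; in the paper, as few as 150) bootstrap labels for a much larger pool of raw perceptual data without propagating uncertain guesses.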

Author Information

Chen Yan (DeepMind)
Federico Carnevale (Google DeepMind)
Petko I Georgiev (Google DeepMind)
Adam Santoro (DeepMind)
Aurelia Guy (University of California Berkeley)
Alistair Muldal (DeepMind)
Chia-Chun Hung (DeepMind)
Joshua Abramson (DeepMind)
Timothy Lillicrap (DeepMind & UCL)
Gregory Wayne (Google DeepMind)