Spotlight
Meta-trained agents implement Bayes-optimal agents
Vladimir Mikulik · Grégoire Delétang · Tom McGrath · Tim Genewein · Miljan Martic · Shane Legg · Pedro Ortega

Wed Dec 09 07:00 AM -- 07:10 AM (PST) @ Orals & Spotlights: Continual/Meta/Misc Learning

Memory-based meta-learning is a powerful technique for building agents that adapt quickly to any task within a target distribution. A previous theoretical study argued that this remarkable performance arises because the meta-training protocol incentivises agents to behave Bayes-optimally. We empirically investigate this claim on a number of prediction and bandit tasks. Inspired by ideas from theoretical computer science, we show that meta-learned and Bayes-optimal agents not only behave alike but also share a similar computational structure, in the sense that one agent can approximately simulate the other. Furthermore, we show that Bayes-optimal agents are fixed points of the meta-learning dynamics. Our results suggest that memory-based meta-learning is a general technique for numerically approximating Bayes-optimal agents, even for task distributions for which we currently lack tractable models.
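
As a concrete illustration of the prediction setting, the following minimal sketch (not taken from the paper; the Bernoulli task distribution, sequence length, and constant baseline are illustrative assumptions) compares the meta-learning objective, expected cumulative log-loss over a distribution of tasks, for the exact Bayes-optimal predictor (Laplace's rule under a uniform prior) and a naive constant predictor. A memory-based agent trained to minimise this objective is incentivised to match the Bayes-optimal predictions.

```python
# Minimal sketch (not from the paper): Bernoulli prediction tasks with a uniform prior.
import numpy as np

rng = np.random.default_rng(0)
T = 20            # steps per task (illustrative)
num_tasks = 5000  # Monte Carlo sample of tasks from the target distribution

def bayes_optimal_prediction(num_ones, t):
    """Posterior predictive P(x_{t+1} = 1 | x_1..x_t) under a uniform Beta(1,1) prior."""
    return (num_ones + 1) / (t + 2)

def meta_objective(predict, seqs):
    """Average cumulative log-loss of a sequential predictor over the sampled tasks."""
    total = 0.0
    for seq in seqs:
        num_ones = 0
        for t, x in enumerate(seq):
            p = predict(num_ones, t)
            total += -(x * np.log(p) + (1 - x) * np.log(1.0 - p))
            num_ones += x
    return total / len(seqs)

# Sample tasks: theta ~ Uniform(0, 1), then x_1..x_T ~ Bernoulli(theta).
thetas = rng.uniform(size=num_tasks)
seqs = (rng.uniform(size=(num_tasks, T)) < thetas[:, None]).astype(int)

# The Bayes-optimal predictor minimises the meta-training objective; a memory-based
# agent (e.g. an LSTM) trained on this loss is therefore driven towards its predictions.
print("Bayes-optimal predictor:", meta_objective(bayes_optimal_prediction, seqs))
print("Constant 0.5 predictor :", meta_objective(lambda num_ones, t: 0.5, seqs))
```

Under these assumptions the Bayes-optimal predictor attains a strictly lower average log-loss than the constant baseline, which is the sense in which the meta-training protocol incentivises Bayes-optimal behaviour.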

Author Information

Vlad Mikulik (Google DeepMind)
Grégoire Delétang (DeepMind)
Tom McGrath (DeepMind)
Tim Genewein (DeepMind)
Miljan Martic (DeepMind)
Shane Legg (DeepMind)
Pedro Ortega (DeepMind)
