A central question in reinforcement learning (RL) is how to leverage prior knowledge to accelerate learning in new tasks. We propose a Bayesian exploration method for lifelong reinforcement learning (BLRL) that aims to learn a Bayesian posterior that distills the common structure shared across different tasks. We further derive a sample complexity analysis of BLRL in the finite MDP setting. To scale our approach, we propose a variational Bayesian Lifelong Learning (VBLRL) algorithm that is based on Bayesian neural networks, can be combined with recent model-based RL methods, and exhibits backward transfer. Experimental results on three challenging domains show that our algorithms adapt to new tasks faster than state-of-the-art lifelong RL methods.
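The abstract describes learning a Bayesian posterior that distills structure shared across tasks and reusing it when a new task arrives. As a toy illustration only (not the paper's BLRL algorithm; the class name, Dirichlet-count model, and finite-MDP setup are all illustrative assumptions), one can sketch the idea of pooling transition counts from earlier tasks into a posterior that then serves as the prior for a new task:

```python
from collections import defaultdict

class DirichletDynamicsPrior:
    """Toy sketch: pool transition counts from many tasks into one
    Dirichlet posterior over next-state distributions, then reuse it
    as the prior when a new task begins. Not the paper's BLRL method;
    all names here are illustrative assumptions."""

    def __init__(self, n_states, alpha=1.0):
        self.n_states = n_states
        self.alpha = alpha  # symmetric Dirichlet prior strength
        # (state, action) -> pseudo-counts over next states
        self.counts = defaultdict(lambda: [0.0] * n_states)

    def update(self, s, a, s_next):
        """Record one observed transition (s, a) -> s_next from any task."""
        self.counts[(s, a)][s_next] += 1.0

    def posterior_mean(self, s, a):
        """Posterior-mean transition distribution; uniform before any data."""
        c = self.counts[(s, a)]
        total = sum(c) + self.alpha * self.n_states
        return [(ci + self.alpha) / total for ci in c]

# Transitions gathered in earlier tasks sharpen the prior an agent
# would start from in a later task.
prior = DirichletDynamicsPrior(n_states=3)
for _ in range(3):
    prior.update(0, 0, 2)  # earlier tasks tend to move state 0 to state 2
print(prior.posterior_mean(0, 0))  # mass concentrates on state 2
```

In a lifelong setting, starting each new task from such a pooled posterior rather than from scratch is what lets exploration focus on where tasks actually differ; the paper's VBLRL replaces the conjugate counts with Bayesian neural networks to scale this beyond finite MDPs.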
Author Information
Haotian Fu (Brown University)
Shangqun Yu (Brown University)
Michael Littman (Brown University)
George Konidaris (Brown University)
More from the Same Authors
- 2022 Spotlight: Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex »
  Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman
- 2022 Poster: Faster Deep Reinforcement Learning with Slower Online Network »
  Kavosh Asadi · Rasool Fakoor · Omer Gottesman · Taesup Kim · Michael Littman · Alexander Smola
- 2022 Poster: Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex »
  Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman
- 2022 Poster: Model-based Lifelong Reinforcement Learning with Bayesian Exploration »
  Haotian Fu · Shangqun Yu · Michael Littman · George Konidaris
- 2021 Poster: On the Expressivity of Markov Reward »
  David Abel · Will Dabney · Anna Harutyunyan · Mark Ho · Michael Littman · Doina Precup · Satinder Singh
- 2021 Poster: Learning Markov State Abstractions for Deep Reinforcement Learning »
  Cameron Allen · Neev Parikh · Omer Gottesman · George Konidaris
- 2021 Oral: On the Expressivity of Markov Reward »
  David Abel · Will Dabney · Anna Harutyunyan · Mark Ho · Michael Littman · Doina Precup · Satinder Singh
- 2016 Workshop: The Future of Interactive Machine Learning »
  Kory Mathewson · Kaushik Subramanian · Mark Ho · Robert Loftin · Joseph L Austerweil · Anna Harutyunyan · Doina Precup · Layla El Asri · Matthew Gombolay · Jerry Zhu · Sonia Chernova · Charles Isbell · Patrick M Pilarski · Weng-Keen Wong · Manuela Veloso · Julie A Shah · Matthew Taylor · Brenna Argall · Michael Littman
- 2016 Oral: Showing versus doing: Teaching by demonstration »
  Mark Ho · Michael Littman · James MacGlashan · Fiery Cushman · Joseph L Austerweil
- 2016 Poster: Showing versus doing: Teaching by demonstration »
  Mark Ho · Michael Littman · James MacGlashan · Fiery Cushman · Joseph L Austerweil
- 2011 Poster: TD_gamma: Re-evaluating Complex Backups in Temporal Difference Learning »
  George Konidaris · Scott Niekum · Philip Thomas
- 2010 Poster: Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories »
  George Konidaris · Scott R Kuindersma · Andrew G Barto · Roderic A Grupen
- 2009 Poster: Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining »
  George Konidaris · Andrew G Barto
- 2009 Spotlight: Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining »
  George Konidaris · Andrew G Barto