The essence of exploration is acting to try to decrease uncertainty. We propose a new methodology for representing uncertainty in continuous-state control problems. Our approach, multi-resolution exploration (MRE), uses a hierarchical mapping to identify regions of the state space that would benefit from additional samples. We demonstrate MRE's broad utility by using it to speed up learning in a prototypical model-based and value-based reinforcement-learning method. Empirical results show that MRE improves upon state-of-the-art exploration approaches.
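The paper's hierarchical mapping could be sketched as a kd-tree-style partition of the state space in which each cell counts the samples it has received and splits when it gets crowded, so resolution is fine where data is dense and coarse elsewhere; cells with few samples relative to their capacity are the regions that would benefit from additional exploration. This is only an illustrative sketch, not the authors' implementation: the class name, the `capacity`/`max_depth` parameters, and the simple count-based `knownness` score are all assumptions.

```python
class MRENode:
    """One cell of a kd-tree-style partition of a box-shaped state space.

    A cell splits along alternating dimensions once it has seen more than
    `capacity` samples (up to `max_depth`), so frequently visited regions
    are represented at finer resolution. A point's `knownness` is the
    sample count of its leaf cell relative to capacity; low knownness
    marks a region as a candidate for further exploration.
    """

    def __init__(self, lo, hi, depth=0, capacity=8, max_depth=10):
        self.lo, self.hi = lo, hi      # per-dimension cell bounds
        self.depth = depth
        self.capacity = capacity
        self.max_depth = max_depth
        self.count = 0                 # samples seen by this subtree
        self.children = None           # (low-side, high-side) after a split

    def _split(self):
        d = self.depth % len(self.lo)
        mid = 0.5 * (self.lo[d] + self.hi[d])
        lhi = list(self.hi); lhi[d] = mid
        rlo = list(self.lo); rlo[d] = mid
        self.children = (
            MRENode(list(self.lo), lhi, self.depth + 1, self.capacity, self.max_depth),
            MRENode(rlo, list(self.hi), self.depth + 1, self.capacity, self.max_depth),
        )

    def add(self, s):
        """Record a visited state s (a sequence of coordinates)."""
        self.count += 1
        if self.children is None:
            if self.count > self.capacity and self.depth < self.max_depth:
                self._split()          # refine this region of the space
            else:
                return
        d = self.depth % len(s)
        mid = 0.5 * (self.lo[d] + self.hi[d])
        (self.children[0] if s[d] < mid else self.children[1]).add(s)

    def knownness(self, s):
        """Return a score in [0, 1]: how well-sampled is s's leaf cell?"""
        node = self
        while node.children is not None:
            d = node.depth % len(s)
            mid = 0.5 * (node.lo[d] + node.hi[d])
            node = node.children[0] if s[d] < mid else node.children[1]
        return min(1.0, node.count / node.capacity)
```

An agent could add an exploration bonus proportional to `1 - knownness(s)` to its value estimates, steering it toward coarse, sparsely sampled cells.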
Ali Nouri (Rutgers University)
Michael L. Littman (Rutgers University)
Michael L. Littman is a professor and chair of the Department of Computer Science at Rutgers University and directs the Rutgers Laboratory for Real-Life Reinforcement Learning (RL3). His research in machine learning examines algorithms for decision making under uncertainty. Littman has earned multiple awards for teaching, and his research has been recognized with three best-paper awards on the topics of meta-learning for computer crossword solving, complexity analysis of planning under uncertainty, and algorithms for efficient reinforcement learning. He has served on the editorial boards of several machine-learning journals and was Program Co-chair of ICML 2009.
2008 Spotlight: Multi-resolution Exploration in Continuous Spaces
Ali Nouri · Michael L. Littman