Importance sampling (IS) is a common reweighting strategy for off-policy prediction in reinforcement learning. While it is unbiased and consistent, it can produce high-variance updates to the value function weights. In this work, we explore a resampling strategy as an alternative to reweighting. We propose Importance Resampling (IR) for off-policy prediction, which resamples experience from a replay buffer and applies standard on-policy updates. The approach avoids importance sampling ratios in the update itself, instead correcting the distribution before the update. We characterize the bias and consistency of IR, particularly compared to Weighted IS (WIS). We demonstrate in several microworlds that IR has improved sample efficiency and lower-variance updates compared to IS and several variance-reduced IS strategies, including variants of WIS and V-trace, which clips IS ratios. We also provide a demonstration showing IR improves over IS for learning a value function from images in a racing car simulator.
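To make the distinction between reweighting and resampling concrete, below is a minimal sketch of the IR idea for linear TD(0) prediction. It assumes access to the target and behaviour policy probabilities pi(a|s) and mu(a|s) and a feature function; the names ir_update and features are illustrative, not the authors' implementation, and details such as the bias-corrected variant from the paper are omitted.

```python
import numpy as np

def ir_update(buffer, w, features, pi, mu, alpha=0.1, gamma=0.99, batch_size=32):
    """One sketch of an Importance Resampling (IR) step.

    Transitions in `buffer` were collected under a behaviour policy mu; we want
    the value function of a target policy pi. Instead of weighting each TD update
    by rho = pi(a|s) / mu(a|s), IR resamples transitions with probability
    proportional to rho and then applies ordinary, uncorrected TD(0) updates.

    buffer: list of (s, a, r, s_next) transitions collected under mu
    w: weight vector for a linear value estimate v(s) = w @ features(s)
    pi, mu: callables giving pi(a, s) and mu(a, s) as probabilities
    """
    # Importance sampling ratios for every transition in the buffer.
    rhos = np.array([pi(a, s) / mu(a, s) for (s, a, r, s_next) in buffer])
    probs = rhos / rhos.sum()  # resampling distribution over the buffer

    # Resample indices proportional to rho (with replacement).
    idx = np.random.choice(len(buffer), size=batch_size, p=probs)

    for i in idx:
        s, a, r, s_next = buffer[i]
        x, x_next = features(s), features(s_next)
        # Standard on-policy TD error: note there is no rho in the update.
        delta = r + gamma * (w @ x_next) - (w @ x)
        w = w + (alpha / batch_size) * delta * x
    return w
```

The correction happens entirely in the sampling step, so each individual update has the same magnitude as an on-policy update, which is the source of the variance reduction the abstract describes.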
Author Information
Matthew Schlegel (University of Alberta)
An AI and coffee enthusiast with research experience in RL and ML. Currently pursuing a PhD at the University of Alberta! Excited about off-policy policy evaluation, general value functions, understanding the behavior of artificial neural networks, and cognitive science (specifically cognitive neuroscience).
Wes Chung (McGill University)
Daniel Graves (Huawei Technologies Canada)
Jian Qian (University of Alberta)
Martha White (University of Alberta)
More from the Same Authors
- 2020 Poster: An implicit function learning approach for parametric modal regression (Yangchen Pan · Ehsan Imani · Amir-massoud Farahmand · Martha White)
- 2020 Poster: Towards Safe Policy Improvement for Non-Stationary MDPs (Yash Chandak · Scott Jordan · Georgios Theocharous · Martha White · Philip Thomas)
- 2020 Spotlight: Towards Safe Policy Improvement for Non-Stationary MDPs (Yash Chandak · Scott Jordan · Georgios Theocharous · Martha White · Philip Thomas)
- 2020 Session: Orals & Spotlights Track 14: Reinforcement Learning (Deepak Pathak · Martha White)
- 2019 Workshop: The Optimization Foundations of Reinforcement Learning (Bo Dai · Niao He · Nicolas Le Roux · Lihong Li · Dale Schuurmans · Martha White)
- 2019 Poster: Learning Macroscopic Brain Connectomes via Group-Sparse Factorization (Farzane Aminmansour · Andrew Patterson · Lei Le · Yisu Peng · Daniel Mitchell · Franco Pestilli · Cesar F Caiafa · Russell Greiner · Martha White)
- 2019 Poster: Meta-Learning Representations for Continual Learning (Khurram Javed · Martha White)
- 2018 Poster: Supervised autoencoders: Improving generalization performance with unsupervised regularizers (Lei Le · Andrew Patterson · Martha White)
- 2018 Poster: Context-dependent upper-confidence bounds for directed exploration (Raksha Kumaraswamy · Matthew Schlegel · Adam White · Martha White)
- 2018 Poster: An Off-policy Policy Gradient Theorem Using Emphatic Weightings (Ehsan Imani · Eric Graves · Martha White)
- 2016 Poster: Estimating the class prior and posterior from noisy positives and unlabeled data (Shantanu Jain · Martha White · Predrag Radivojac)
- 2012 Poster: Convex Multi-view Subspace Learning (Martha White · Yao-Liang Yu · Xinhua Zhang · Dale Schuurmans)
- 2010 Poster: Relaxed Clipping: A Global Training Method for Robust Regression and Classification (Yao-Liang Yu · Min Yang · Linli Xu · Martha White · Dale Schuurmans)
- 2010 Poster: Interval Estimation for Reinforcement-Learning Algorithms in Continuous-State Domains (Martha White · Adam M White)