Reinforcement learning (RL) algorithms are often categorized as either on-policy or off-policy, depending on whether they use data from the target policy of interest or from a different behavior policy. In this paper, we study a subtle distinction between on-policy data and on-policy sampling in the context of the RL sub-problem of policy evaluation. We observe that on-policy sampling may fail to match the expected distribution of on-policy data after observing only a finite number of trajectories, and that this failure hinders data-efficient policy evaluation. Toward improved data efficiency, we show how non-i.i.d., off-policy sampling can produce data that more closely matches the expected on-policy data distribution and consequently increases the accuracy of the Monte Carlo estimator for policy evaluation. We introduce a method called Robust On-Policy Sampling and demonstrate, both theoretically and empirically, that it produces data that converges faster to the expected on-policy distribution than on-policy sampling does. Empirically, we show that this faster convergence yields policy value estimates with lower mean squared error.
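To make the distinction concrete, here is a minimal sketch in a single-state (bandit-style) MDP: the ordinary Monte Carlo estimator averages observed returns, and a simple frequency-matching sampler deterministically picks the action whose empirical frequency most lags its target-policy probability, so the realized data distribution tracks the on-policy distribution more closely than i.i.d. sampling does. The greedy rule and names such as `matching_action` are illustrative stand-ins, not the paper's exact Robust On-Policy Sampling update.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3
pi = np.array([0.5, 0.3, 0.2])           # target policy (single-state MDP)
true_means = np.array([1.0, 0.0, -1.0])  # per-action expected reward
true_value = pi @ true_means             # v(pi) = E_{a ~ pi}[r(a)]

def reward(a):
    return true_means[a] + rng.normal()

def on_policy_action(counts):
    # i.i.d. sampling from the target policy (counts unused)
    return rng.choice(n_actions, p=pi)

def matching_action(counts):
    # Hypothetical stand-in for the paper's method: pick the action whose
    # empirical frequency lags its target probability the most, so the
    # realized data distribution converges quickly to pi.
    n = counts.sum()
    empirical = counts / n if n > 0 else np.zeros(n_actions)
    return int(np.argmax(pi - empirical))

def mc_estimate(select, n_samples=200):
    counts = np.zeros(n_actions)
    returns = []
    for _ in range(n_samples):
        a = select(counts)
        counts[a] += 1
        returns.append(reward(a))
    # Ordinary Monte Carlo estimator: unweighted average of returns
    return np.mean(returns), counts / n_samples

for name, select in [("on-policy", on_policy_action),
                     ("frequency-matching", matching_action)]:
    est, freq = mc_estimate(select)
    print(f"{name:18s} estimate={est:+.3f} "
          f"(true {true_value:+.3f}), action freq={np.round(freq, 2)}")
```

Note that the unweighted Monte Carlo average remains a sensible estimate under the matching sampler precisely because the collected data converges to the on-policy distribution; no importance weights are applied.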
Author Information
Rujie Zhong (University of Edinburgh)
Duohan Zhang (University of Wisconsin–Madison)
Lukas Schäfer (University of Edinburgh)
Stefano Albrecht (University of Edinburgh)
Josiah Hanna (University of Wisconsin–Madison)
More from the Same Authors
- 2021: Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
  Georgios Papoudakis · Filippos Christianos · Lukas Schäfer · Stefano Albrecht
- 2021: Safe Evaluation For Offline Learning: Are We Ready To Deploy?
  Hager Radi · Josiah Hanna · Peter Stone · Matthew Taylor
- 2021: Robust On-Policy Data Collection for Data-Efficient Policy Evaluation
  Rujie Zhong · Josiah Hanna · Lukas Schäfer · Stefano Albrecht
- 2022: Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings
  Guy Azran · Mohamad Hosein Danesh · Stefano Albrecht · Sarah Keren
- 2022: Verifiable Goal Recognition for Autonomous Driving with Occlusions
  Cillian Brewitt · Massimiliano Tamborski · Stefano Albrecht
- 2022: Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction
  Brahma Pavse · Josiah Hanna
- 2022: Sample Relationships through the Lens of Learning Dynamics with Label Information
  Shangmin Guo · Yi Ren · Stefano Albrecht · Kenny Smith
- 2022: Learning Representations for Reinforcement Learning with Hierarchical Forward Models
  Trevor McInroe · Lukas Schäfer · Stefano Albrecht
- 2022: Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning
  Mhairi Dunion · Trevor McInroe · Kevin Sebastian Luck · Josiah Hanna · Stefano Albrecht
- 2021 Poster: Agent Modelling under Partial Observability for Deep Reinforcement Learning
  Georgios Papoudakis · Filippos Christianos · Stefano Albrecht
- 2020 Poster: Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
  Filippos Christianos · Lukas Schäfer · Stefano Albrecht