While much progress has been made in understanding the minimax sample complexity of reinforcement learning (RL)---the complexity of learning on the "worst-case" instance---such measures of complexity often do not capture the true difficulty of learning. In practice, on an "easy" instance, we might hope to achieve a complexity far better than that achievable on the worst-case instance. In this work we seek to understand this "instance-dependent" complexity of learning in the setting of RL with linear function approximation. We propose an algorithm, PEDEL, which achieves a fine-grained instance-dependent measure of complexity, the first of its kind in the RL with function approximation setting, thereby capturing the difficulty of learning on each particular problem instance. Through an explicit example, we show that PEDEL yields provable gains over low-regret, minimax-optimal algorithms and that such algorithms are unable to hit the instance-optimal rate. Our approach relies on a novel online experiment design-based procedure which focuses the exploration budget on the "directions" most relevant to learning a near-optimal policy, and may be of independent interest.
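To make the experiment-design idea concrete, the sketch below shows a standard G-optimal design computed with the Frank-Wolfe method: given a set of feature directions, it finds sampling weights that minimize the worst-case uncertainty over those directions. This is an illustrative, assumed building block in the spirit of the abstract, not the PEDEL algorithm itself; the function name `g_optimal_design` and its interface are hypothetical.

```python
import numpy as np

def g_optimal_design(Phi, iters=2000):
    """Frank-Wolfe for the G-optimal experiment design.

    Given feature directions phi_1, ..., phi_n (rows of Phi), find a
    distribution lam minimizing max_i phi_i^T A(lam)^{-1} phi_i, where
    A(lam) = sum_i lam_i phi_i phi_i^T. By the Kiefer-Wolfowitz theorem,
    the optimal value equals d, the feature dimension.
    """
    n, d = Phi.shape
    lam = np.full(n, 1.0 / n)  # start from the uniform design
    for _ in range(iters):
        A = Phi.T @ (lam[:, None] * Phi)
        Ainv = np.linalg.inv(A)
        # "leverage" of each direction under the current design
        lev = np.einsum("ij,jk,ik->i", Phi, Ainv, Phi)
        i = np.argmax(lev)  # most uncertain direction
        # exact line-search step toward the vertex e_i
        gamma = (lev[i] / d - 1.0) / (lev[i] - 1.0)
        e = np.zeros(n)
        e[i] = 1.0
        lam = (1.0 - gamma) * lam + gamma * e
    return lam
```

The returned weights concentrate samples on the directions that are hardest to estimate, which is the sense in which an experiment-design-based exploration scheme "focuses the exploration budget" rather than exploring uniformly.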
Author Information
Andrew Wagenmaker (University of Washington)
Kevin Jamieson (University of Washington)
More from the Same Authors
- 2022 Poster: Active Learning with Safety Constraints »
  Romain Camilleri · Andrew Wagenmaker · Jamie Morgenstern · Lalit Jain · Kevin Jamieson
- 2022 Poster: Instance-optimal PAC Algorithms for Contextual Bandits »
  Zhaoqi Li · Lillian Ratliff · Houssam Nassif · Kevin Jamieson · Lalit Jain
- 2021: Beyond No Regret: Instance-Dependent PAC Reinforcement Learning »
  Andrew Wagenmaker · Kevin Jamieson
- 2021 Poster: Selective Sampling for Online Best-arm Identification »
  Romain Camilleri · Zhihan Xiong · Maryam Fazel · Lalit Jain · Kevin Jamieson
- 2021 Poster: Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers »
  Julian Katz-Samuels · Blake Mason · Kevin Jamieson · Rob Nowak
- 2021 Poster: Corruption Robust Active Learning »
  Yifang Chen · Simon Du · Kevin Jamieson
- 2020 Poster: An Empirical Process Approach to the Union Bound: Practical Algorithms for Combinatorial and Linear Bandits »
  Julian Katz-Samuels · Lalit Jain · Zohar Karnin · Kevin Jamieson
- 2019 Poster: A New Perspective on Pool-Based Active Classification and False-Discovery Control »
  Lalit Jain · Kevin Jamieson
- 2019 Poster: Sequential Experimental Design for Transductive Linear Bandits »
  Lalit Jain · Kevin Jamieson · Tanner Fiez · Lillian Ratliff
- 2019 Poster: Non-Asymptotic Gap-Dependent Regret Bounds for Tabular MDPs »
  Max Simchowitz · Kevin Jamieson
- 2018 Poster: A Bandit Approach to Sequential Experimental Design with False Discovery Control »
  Kevin Jamieson · Lalit Jain