Poster
Almost Horizon-Free Structure-Aware Best Policy Identification with a Generative Model
Andrea Zanette · Mykel J Kochenderfer · Emma Brunskill
East Exhibition Hall B, C #178
Keywords: [ Reinforcement Learning ] [ Reinforcement Learning and Planning ] [ Markov Decision Processes ]
Abstract:
This paper focuses on the problem of computing an $\epsilon$-optimal policy in a discounted Markov Decision Process (MDP), provided that we can access the reward and transition function through a generative model. We propose an algorithm that is initially agnostic to the MDP but can leverage the specific MDP structure, expressed in terms of the variances of the rewards and of the next-state value function as well as the gaps in the optimal action-value function, to reduce the sample complexity needed to find a good policy, precisely highlighting the contribution of each state-action pair to the final sample complexity. A key feature of our analysis is that it removes all horizon dependencies from the sample complexity of suboptimal actions, except for the intrinsic scaling of the value function and a constant additive term.
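For readers unfamiliar with the setting, the sketch below illustrates what generative-model access to a tabular MDP means: for any state-action pair we can draw i.i.d. samples of the reward and next state, estimate an empirical model, and plan on it. This is a minimal sketch of the naive uniform-sampling baseline, not the paper's algorithm (which allocates samples adaptively using per-pair variance and gap information); all names such as `GenerativeModel`, `sample`, and `empirical_model` are hypothetical.

```python
import numpy as np

class GenerativeModel:
    """Toy tabular MDP exposing a generative-model interface:
    sample(s, a) returns one (reward, next_state) draw."""
    def __init__(self, P, R, rng=None):
        self.P = P              # P[s, a] is a distribution over next states
        self.R = R              # R[s, a] is the mean reward
        self.rng = rng or np.random.default_rng(0)
        self.n_states, self.n_actions = R.shape

    def sample(self, s, a):
        s_next = self.rng.choice(self.n_states, p=self.P[s, a])
        r = self.R[s, a] + self.rng.normal(scale=0.1)  # noisy reward
        return r, s_next

def empirical_model(model, n_samples):
    """Estimate P and R from n_samples calls per (s, a). A structure-aware
    method would instead allocate samples non-uniformly, guided by estimates
    of the reward / next-state value variance at each pair."""
    S, A = model.n_states, model.n_actions
    P_hat = np.zeros((S, A, S))
    R_hat = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            for _ in range(n_samples):
                r, s_next = model.sample(s, a)
                R_hat[s, a] += r / n_samples
                P_hat[s, a, s_next] += 1.0 / n_samples
    return P_hat, R_hat

def value_iteration(P, R, gamma=0.9, iters=500):
    """Standard value iteration on the (estimated) discounted MDP."""
    Q = np.zeros_like(R)
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = R + gamma * P @ V   # (S, A, S) @ (S,) -> (S, A)
    return Q

# Usage: a random two-state, two-action MDP.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(2), size=(2, 2))
R = rng.uniform(size=(2, 2))
model = GenerativeModel(P, R, rng)
P_hat, R_hat = empirical_model(model, n_samples=200)
Q_hat = value_iteration(P_hat, R_hat)
print("greedy policy:", Q_hat.argmax(axis=1))
```

Under this uniform scheme every pair gets the same budget regardless of how easy it is to certify as suboptimal; the paper's contribution is a sample-complexity bound in which each pair's cost reflects its own variances and optimality gap.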