RL for Latent MDPs: Regret Guarantees and a Lower Bound

Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor

Keywords: Reinforcement Learning and Planning

Spotlight presentation: Thu 9 Dec 8:30 a.m. to 10:00 a.m. PST

Abstract: In this work, we consider the regret minimization problem for reinforcement learning in latent Markov Decision Processes (LMDPs). In an LMDP, an MDP is randomly drawn from a set of $M$ possible MDPs at the beginning of the interaction, but the identity of the chosen MDP is not revealed to the agent. We first show that a general instance of LMDPs requires $\Omega((SA)^M)$ episodes to even approximate the optimal policy. Then, we consider sufficient assumptions under which learning good policies requires only a polynomial number of episodes. We show that the key condition is a notion of separation between the system dynamics of the MDPs. With sufficient separation, we provide an efficient algorithm with a local guarantee, {\it i.e.,} a sublinear regret guarantee when we are given a good initialization. Finally, under standard statistical sufficiency assumptions common in the Predictive State Representation (PSR) literature (e.g., \cite{boots2011online}) and a reachability assumption, we show that the need for initialization can be removed.
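To make the interaction protocol concrete, below is a minimal Python sketch of an episodic LMDP environment: a latent index is drawn at the start of each episode and hidden from the learner, who then observes only states and rewards. Everything here, including the class name LatentMDP, the toy two-MDP instance, and the random placeholder policy, is an illustrative assumption rather than code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class LatentMDP:
    """M tabular MDPs over a shared state/action space; one is drawn per episode."""

    def __init__(self, transitions, rewards, weights, horizon):
        # transitions: list of M arrays, each of shape (S, A, S), rows summing to 1
        # rewards:     list of M arrays, each of shape (S, A)
        self.P, self.R = transitions, rewards
        self.w, self.H = weights, horizon
        self._m = None  # latent MDP index, never revealed to the agent

    def reset(self):
        # Draw the latent context for this episode from the mixing weights w.
        self._m = rng.choice(len(self.P), p=self.w)
        self.s, self.t = 0, 0
        return self.s  # only the state is observable, not self._m

    def step(self, a):
        r = self.R[self._m][self.s, a]
        n_states = self.P[self._m].shape[2]
        self.s = rng.choice(n_states, p=self.P[self._m][self.s, a])
        self.t += 1
        return self.s, r, self.t >= self.H

# Toy instance: M = 2 MDPs with S = A = 2 and horizon H = 5.
S, A, H, M = 2, 2, 5, 2
P = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(M)]
R = [rng.random((S, A)) for _ in range(M)]
env = LatentMDP(P, R, weights=[0.5, 0.5], horizon=H)

s, done = env.reset(), False
while not done:
    a = int(rng.integers(A))  # placeholder for the learner's policy
    s, r, done = env.step(a)
```

Note that the learner only ever sees (state, action, reward) tuples; having to distinguish among the $M$ hidden contexts from such trajectories alone is what drives the hardness results described in the abstract.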
