Poster
Online learning in episodic Markovian decision processes by relative entropy policy search
Alexander Zimin · Gergely Neu
Thu Dec 05 07:00 PM -- 11:59 PM (PST) @ Harrah's Special Events Center, 2nd Floor
We study the problem of online learning in finite episodic Markov decision processes where the loss function is allowed to change between episodes. The natural performance measure in this learning problem is the regret, defined as the difference between the total loss of the best stationary policy and the total loss suffered by the learner. We assume that the learner is given access to a finite action space $\mathcal{A}$ and that the state space $\mathcal{X}$ has a layered structure with $L$ layers, so that state transitions are only possible between consecutive layers. We describe a variant of the recently proposed Relative Entropy Policy Search algorithm and show that its regret after $T$ episodes is $2\sqrt{L|\mathcal{X}||\mathcal{A}|T\log(|\mathcal{X}||\mathcal{A}|/L)}$ in the bandit setting and $2L\sqrt{T\log(|\mathcal{X}||\mathcal{A}|/L)}$ in the full information setting. These guarantees largely improve previously known results under much milder assumptions and cannot be significantly improved under general assumptions.
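To build intuition for the layered-MDP setting, the sketch below considers a simplified special case: full information and deterministic transitions, where the episodic MDP reduces to online shortest paths on a layered DAG. It implements not the paper's O-REPS update but the closely related exponential-weights-over-paths baseline, made efficient by a backward dynamic program ("weight pushing"). All names, the toy DAG, and the loss sequence are illustrative assumptions, not taken from the paper.

```python
import math
import random

def backward_values(layers, edges, weights):
    """beta[v] = total weight of all paths from v to the final layer."""
    beta = {v: 1.0 for v in layers[-1]}
    for k in range(len(layers) - 2, -1, -1):
        for u in layers[k]:
            beta[u] = sum(weights[(u, v)] * beta[v]
                          for v in layers[k + 1] if (u, v) in edges)
    return beta

def sample_path(layers, edges, weights, beta, rng):
    """Sample a full path with probability proportional to its total weight."""
    u = layers[0][0]          # single start state assumed for simplicity
    path = []
    for k in range(len(layers) - 1):
        succ = [v for v in layers[k + 1] if (u, v) in edges]
        probs = [weights[(u, v)] * beta[v] for v in succ]
        r, acc = rng.random() * sum(probs), 0.0
        for v, p in zip(succ, probs):
            acc += p
            if r <= acc:
                path.append((u, v))
                u = v
                break
    return path

# Tiny 3-layer example: one start state, two middle states, one goal state.
layers = [["s"], ["a", "b"], ["g"]]
edges = {("s", "a"), ("s", "b"), ("a", "g"), ("b", "g")}
eta = 0.5                     # learning rate (illustrative choice)
cum_loss = {e: 0.0 for e in edges}
rng = random.Random(0)

for episode in range(3):
    weights = {e: math.exp(-eta * cum_loss[e]) for e in edges}
    beta = backward_values(layers, edges, weights)
    path = sample_path(layers, edges, weights, beta, rng)
    # Full information: every edge loss is revealed after the episode.
    # Here edges through "a" are always costly, so mass shifts toward "b".
    loss = {e: (1.0 if "a" in e else 0.0) for e in edges}
    for e in edges:
        cum_loss[e] += loss[e]
```

In the bandit setting analyzed in the paper only the losses along the traversed path are observed, and the O-REPS update works with relative-entropy projections onto the set of occupancy measures rather than with path weights directly; this sketch covers only the simpler full-information, deterministic-transition case.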
Author Information
Alexander Zimin (Amazon)
Gergely Neu (Universitat Pompeu Fabra)
More from the Same Authors
- 2020 Poster: A Unifying View of Optimism in Episodic Reinforcement Learning (Gergely Neu · Ciara Pike-Burke)
- 2019 Poster: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates (Carlos Riquelme · Hugo Penedones · Damien Vincent · Hartmut Maennel · Sylvain Gelly · Timothy A Mann · Andre Barreto · Gergely Neu)
- 2019 Poster: Beating SGD Saturation with Tail-Averaging and Minibatching (Nicole Muecke · Gergely Neu · Lorenzo Rosasco)
- 2017 Poster: Boltzmann Exploration Done Right (Nicolò Cesa-Bianchi · Claudio Gentile · Gergely Neu · Gabor Lugosi)
- 2015 Poster: Explore no more: Improved high-probability regret bounds for non-stochastic bandits (Gergely Neu)
- 2014 Poster: Exploiting easy data in online optimization (Amir Sani · Gergely Neu · Alessandro Lazaric)
- 2014 Poster: Efficient learning by implicit exploration in bandit problems with side observations (Tomáš Kocák · Gergely Neu · Michal Valko · Remi Munos)
- 2014 Spotlight: Exploiting easy data in online optimization (Amir Sani · Gergely Neu · Alessandro Lazaric)
- 2014 Poster: Online combinatorial optimization with stochastic decision sets and adversarial losses (Gergely Neu · Michal Valko)
- 2010 Spotlight: Online Markov Decision Processes under Bandit Feedback (Gergely Neu · András György · András Antos · Csaba Szepesvari)
- 2010 Poster: Online Markov Decision Processes under Bandit Feedback (Gergely Neu · András György · Csaba Szepesvari · András Antos)