

Poster

Temporal Regularization for Markov Decision Process

Pierre Thodoroff · Audrey Durand · Joelle Pineau · Doina Precup

Room 517 AB #107

Keywords: [ Reinforcement Learning and Planning ] [ Markov Decision Processes ]


Abstract:

Several applications of reinforcement learning suffer from instability due to high variance. This is especially prevalent in high-dimensional domains. Regularization is a commonly used technique in machine learning to reduce variance, at the cost of introducing some bias. Most existing regularization techniques focus on spatial (perceptual) regularization. Yet in reinforcement learning, due to the nature of the Bellman equation, there is an opportunity to also exploit temporal regularization based on smoothness in value estimates over trajectories. This paper explores a class of methods for temporal regularization. We formally characterize the bias induced by this technique using Markov chain concepts. We illustrate the various characteristics of temporal regularization via a sequence of simple discrete and continuous MDPs, and show that the technique provides improvement even in high-dimensional Atari games.
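
Since the page gives only the abstract, the following is a minimal sketch of the idea it describes: a tabular TD(0) update whose bootstrap target is mixed with the value estimate of the previous state along the trajectory, weighted by a coefficient beta, so that value estimates are encouraged to be smooth over time. The coefficient name beta, the random evaluation policy, and the gymnasium-style environment interface are all assumptions of this sketch, not details taken from the paper.

import numpy as np

def temporally_regularized_td(env, num_episodes=500, alpha=0.1,
                              gamma=0.99, beta=0.2):
    """Tabular TD(0) with a temporal-regularization term (sketch).

    Assumption: the usual one-step target r + gamma * V[s'] is mixed
    with the value of the *previous* state on the trajectory, weighted
    by beta, following the abstract's description of smoothness in
    value estimates over trajectories. The env interface
    (reset()/step() in gymnasium style) is also an assumption.
    """
    V = np.zeros(env.observation_space.n)
    for _ in range(num_episodes):
        s, _ = env.reset()
        prev_s = s  # no predecessor at the start of an episode
        done = False
        while not done:
            a = env.action_space.sample()  # evaluate a random policy
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            bootstrap = 0.0 if terminated else V[s_next]
            # Mix the standard bootstrap with the previous state's value.
            target = r + gamma * ((1 - beta) * bootstrap + beta * V[prev_s])
            V[s] += alpha * (target - V[s])
            prev_s, s = s, s_next
    return V

With beta = 0 this reduces to ordinary TD(0); larger beta trades variance reduction for the bias the paper characterizes with Markov chain concepts.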
