

Poster

Regularizing Trajectory Optimization with Denoising Autoencoders

Rinu Boney · Norman Di Palo · Mathias Berglund · Alexander Ilin · Juho Kannala · Antti Rasmus · Harri Valpola

East Exhibition Hall B + C #190

Keywords: [ Reinforcement Learning ] [ Reinforcement Learning and Planning -> Model-Based RL ]


Abstract:

Trajectory optimization using a learned model of the environment is one of the core elements of model-based reinforcement learning. This procedure often suffers from the optimizer exploiting inaccuracies of the learned model. We propose to regularize trajectory optimization by means of a denoising autoencoder that is trained on the same trajectories as the model of the environment. We show that the proposed regularization leads to improved planning with both gradient-based and gradient-free optimizers. We also demonstrate that using regularized trajectory optimization leads to rapid initial learning in a set of popular motor control tasks, which suggests that the proposed approach can be a useful tool for improving sample efficiency.
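To make the idea concrete, below is a minimal sketch of DAE-regularized planning with a gradient-free optimizer (the cross-entropy method, one of the optimizer families the abstract mentions). All names here (dynamics_model, reward_model, dae_reconstruct, the weight lam) are illustrative placeholders, not the authors' code: the learned model rolls out candidate action sequences, and the DAE's reconstruction error on each (state, action) pair is subtracted from the imagined return, penalizing trajectories that leave the training distribution where the model is likely to be wrong.

```python
import numpy as np

# --- Hypothetical stand-ins; names and forms are assumptions for illustration ---
def dynamics_model(state, action):
    """Learned one-step model s_{t+1} = f(s_t, a_t); trivial placeholder here."""
    return state + 0.1 * action

def reward_model(state, action):
    """Learned reward; placeholder: reach the origin with small actions."""
    return -np.sum(state ** 2) - 0.01 * np.sum(action ** 2)

def dae_reconstruct(x):
    """Trained denoising autoencoder r(x); identity used as a placeholder.
    For a trained DAE, r(x) - x is proportional to the gradient of the
    log density of the training data, so ||r(x) - x||^2 is large in
    regions the model has not seen."""
    return x

def regularized_return(s0, actions, lam=1.0):
    """Imagined return minus a DAE penalty on each (state, action) pair."""
    s, total = s0, 0.0
    for a in actions:
        x = np.concatenate([s, a])
        penalty = np.sum((dae_reconstruct(x) - x) ** 2)
        total += reward_model(s, a) - lam * penalty
        s = dynamics_model(s, a)
    return total

def cem_plan(s0, horizon=10, act_dim=2, iters=5, pop=64, elites=8):
    """Cross-entropy method over action sequences, scored by the
    DAE-regularized return instead of the raw imagined return."""
    mu = np.zeros((horizon, act_dim))
    sigma = np.ones((horizon, act_dim))
    for _ in range(iters):
        samples = mu + sigma * np.random.randn(pop, horizon, act_dim)
        scores = np.array([regularized_return(s0, seq) for seq in samples])
        best = samples[np.argsort(scores)[-elites:]]  # keep top-scoring elites
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu  # planned action sequence; execute mu[0] in MPC fashion

if __name__ == "__main__":
    plan = cem_plan(np.array([1.0, -1.0]))
    print("first planned action:", plan[0])
```

In a gradient-based variant, the same regularizer would instead contribute the term r(x) - x directly to the gradient of the planning objective, as that difference approximates the score of the training distribution.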
