

Poster

Total stochastic gradient algorithms and applications in reinforcement learning

Paavo Parmas

Room 517 AB #152

Keywords: [ Motor Control ] [ Robotics ] [ Model-Based RL ] [ Reinforcement Learning ] [ Decision and Control ] [ Optimization for Deep Networks ] [ Non-Convex Optimization ] [ Gaussian Processes ] [ Graphical Models ] [ Variational Inference ] [ Stochastic Methods ]


Abstract:

Backpropagation and the chain rule of derivatives have been prominent; however, the total derivative rule has not enjoyed the same amount of attention. In this work we show how the total derivative rule leads to an intuitive visual framework for creating gradient estimators on graphical models. In particular, previous "policy gradient theorems" are easily derived. We derive new gradient estimators based on density estimation, as well as a likelihood ratio gradient, which "jumps" to an intermediate node, not directly to the objective function. We evaluate our methods on model-based policy gradient algorithms, achieve good performance, and present evidence towards demystifying the success of the popular PILCO algorithm.
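For background, the two standard families of stochastic gradient estimators that such graphical-model frameworks build on are the likelihood-ratio (score-function) estimator and the reparameterization (pathwise) estimator. The sketch below contrasts them on a toy Gaussian expectation; it is purely illustrative and does not reproduce the paper's total-derivative construction. The objective f, the parameters mu and sigma, and the sample count are assumed values chosen for the example.

```python
# Minimal sketch (not from the paper): two standard estimators for
# d/dmu E_{x ~ N(mu, sigma^2)}[f(x)], using an assumed toy objective f(x) = x^2.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x ** 2          # toy objective; the true gradient w.r.t. mu is 2 * mu

def df(x):
    return 2.0 * x         # derivative of the toy objective

mu, sigma, n = 1.5, 1.0, 100_000
eps = rng.standard_normal(n)
x = mu + sigma * eps       # reparameterized samples x = mu + sigma * eps

# Likelihood-ratio (score-function) estimator:
#   grad ~= mean[ f(x) * d/dmu log N(x; mu, sigma^2) ] = mean[ f(x) * (x - mu) / sigma^2 ]
lr_grad = np.mean(f(x) * (x - mu) / sigma ** 2)

# Reparameterization (pathwise) estimator:
#   grad ~= mean[ f'(x) * dx/dmu ] = mean[ f'(x) ]
rp_grad = np.mean(df(x))

print(f"true gradient: {2 * mu:.3f}")
print(f"likelihood-ratio estimate: {lr_grad:.3f}")
print(f"reparameterization estimate: {rp_grad:.3f}")
```

Both estimators are unbiased here; the abstract's framework concerns how such estimators can be composed and applied at intermediate nodes of a graphical model rather than only at the final objective.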
