Learning Continuous Control Policies by Stochastic Value Gradients
Nicolas Heess · Gregory Wayne · David Silver · Timothy Lillicrap · Tom Erez · Yuval Tassa

Tue Dec 08 04:00 PM -- 08:59 PM (PST) @ 210 C #31

We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.
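The core trick the abstract mentions, treating stochasticity as a deterministic function of exogenous noise, is what makes backpropagation through a stochastic policy possible. A minimal numerical sketch of that idea (not the authors' code; the objective `f(a) = -a**2` and all names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action(mu, sigma, eps):
    # A stochastic action a ~ N(mu, sigma^2) rewritten as a deterministic
    # function of exogenous noise eps ~ N(0, 1): a = mu + sigma * eps.
    # Because eps does not depend on the policy parameters, gradients can
    # flow through the sample into mu and sigma.
    return mu + sigma * eps

def grad_objective_wrt_mu(mu, sigma, n_samples=100_000):
    # Monte-Carlo estimate of d/d mu of E[f(a)] for the toy objective
    # f(a) = -a**2, obtained by differentiating through the sample path:
    # d f(a) / d mu = f'(a) * d a / d mu = (-2a) * 1.
    eps = rng.standard_normal(n_samples)
    a = sample_action(mu, sigma, eps)
    return np.mean(-2.0 * a)

# Analytically E[-(mu + sigma*eps)^2] = -(mu^2 + sigma^2), so the true
# gradient w.r.t. mu is -2*mu; the estimator above should recover it.
g = grad_objective_wrt_mu(mu=0.5, sigma=0.3)
```

In the paper's setting the same mechanism is applied through learned dynamics and value-function models rather than a closed-form objective, yielding the SVG family of estimators.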

Author Information

Nicolas Heess (Google DeepMind)
Greg Wayne (Google DeepMind)
David Silver (DeepMind)
Timothy Lillicrap (Google DeepMind)
Tom Erez (Google DeepMind)
Yuval Tassa (Google DeepMind)
