Spotlight
Natural Value Approximators: Learning when to Trust Past Estimates
Zhongwen Xu · Joseph Modayil · Hado van Hasselt · Andre Barreto · David Silver · Tom Schaul

Wed Dec 06 05:25 PM -- 05:30 PM (PST) @ Hall A

Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm.
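The interpolation described in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the function name, signature, and default discount are assumptions, and in the actual method the gate beta is a learned, state-dependent output of the network rather than a scalar argument.

    def natural_value_estimate(v_direct, prev_estimate, reward, beta, gamma=0.99):
        """Blend a direct value estimate with a projection of the previous one.

        Inverting the one-step relation V(s_prev) ~ reward + gamma * V(s), the
        previous estimate minus the reward just received, divided by gamma,
        yields a second estimate of V(s). `beta` in [0, 1] gates between the
        two estimates; values near 1 trust the direct estimate, values near 0
        trust the projection of the past estimate.
        """
        projected = (prev_estimate - reward) / gamma
        return beta * v_direct + (1.0 - beta) * projected

Because the projection already absorbs the reward just observed, the network itself no longer has to represent the sharp jump in value that a sparse reward induces, which is the source of the claimed improvement in approximation quality.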

Author Information

Zhongwen Xu (DeepMind)
Joseph Modayil (DeepMind)
Hado van Hasselt (DeepMind)
Andre Barreto (DeepMind)
David Silver (DeepMind)
Tom Schaul (DeepMind)
