Spotlight
The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning
Yunhao Tang · Remi Munos · Mark Rowland · Bernardo Avila Pires · Will Dabney · Marc Bellemare

Wed Dec 07 05:00 PM -- 07:00 PM (PST)

We study the multi-step off-policy learning approach to distributional RL. Despite the apparent similarity between value-based RL and distributional RL, our study reveals intriguing and fundamental differences between the two cases in the multi-step setting. We identify a novel notion of path-dependent distributional TD error, which is indispensable for principled multi-step distributional RL. The distinction from the value-based case has important implications for concepts such as backward-view algorithms. Our work provides the first theoretical guarantees on multi-step off-policy distributional RL algorithms, including results that apply to the small number of existing approaches to multi-step distributional RL. In addition, we derive a novel algorithm, Quantile Regression-Retrace, which leads to a deep RL agent, QR-DQN-Retrace, that shows empirical improvements over QR-DQN on the Atari-57 benchmark. Collectively, we shed light on how the unique challenges of multi-step distributional RL can be addressed both in theory and practice.
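For readers unfamiliar with the base agent, the sketch below illustrates the one-step quantile-regression TD loss used by QR-DQN, which QR-DQN-Retrace extends with multi-step off-policy corrections. It is a minimal illustration only: the paper's path-dependent TD errors and Retrace weighting are not shown, and the function name and array shapes are assumptions for this example, not the authors' implementation.

    import numpy as np

    def quantile_huber_loss(theta, target, kappa=1.0):
        """One-step quantile-regression TD loss (QR-DQN style, illustrative).

        theta:  (N,) predicted quantile values for the current state-action pair.
        target: (N,) target quantile values, e.g. r + gamma * theta_next (no gradient).
        """
        N = theta.shape[0]
        taus = (np.arange(N) + 0.5) / N              # quantile midpoints tau_i
        u = target[None, :] - theta[:, None]         # pairwise TD errors, shape (N, N)
        huber = np.where(np.abs(u) <= kappa,
                         0.5 * u ** 2,
                         kappa * (np.abs(u) - 0.5 * kappa))
        # Asymmetric quantile weighting: penalize over/under-estimation differently per tau
        weight = np.abs(taus[:, None] - (u < 0).astype(float))
        return (weight * huber / kappa).mean()

In the multi-step off-policy setting studied in the paper, the target distribution is instead built from a trajectory of rewards and importance-weighted corrections, which is where the path-dependent distributional TD error becomes necessary.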

Author Information

Yunhao Tang (Columbia University)

I am a PhD student at Columbia IEOR. My research interests are reinforcement learning and approximate inference.

Remi Munos (DeepMind)
Mark Rowland (DeepMind)
Bernardo Avila Pires (DeepMind)
Will Dabney (DeepMind)
Marc Bellemare (Google Brain)
