

Poster in Workshop: Deep Reinforcement Learning

Understanding and Preventing Capacity Loss in Reinforcement Learning

Clare Lyle · Mark Rowland · Will Dabney


Abstract:

The reinforcement learning (RL) problem is rife with sources of non-stationarity that can destabilize or inhibit learning progress. We identify a key mechanism by which this occurs in agents using neural networks as function approximators: capacity loss, whereby networks trained to predict a sequence of target values lose their ability to quickly fit new functions over time. We demonstrate that capacity loss occurs in a broad range of RL agents and environments, and is particularly damaging to learning progress in sparse-reward tasks. We then present a simple regularizer, Initial Feature Regularization (InFeR), that mitigates this phenomenon by regressing a subspace of features towards its value at initialization, improving performance over a state-of-the-art model-free algorithm in the Atari 2600 suite. Finally, we study how this regularization affects different notions of capacity and evaluate other mechanisms by which it may improve performance.
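
To make the InFeR idea concrete, below is a minimal sketch of how such a regularizer could be implemented: auxiliary linear heads on the network's penultimate features are regressed towards the outputs produced by a frozen copy of those heads (and features) at initialization. This is an illustrative assumption about the implementation, not the authors' code; names such as InFeRRegularizer, num_heads, scale, and beta are hypothetical.

```python
# Hedged sketch of an InFeR-style auxiliary loss in PyTorch.
# Assumes access to the live network's penultimate features and to the
# corresponding features from a frozen copy of the network at initialization.
import copy
import torch
import torch.nn as nn

class InFeRRegularizer(nn.Module):
    def __init__(self, feature_dim: int, num_heads: int = 10, scale: float = 1.0):
        super().__init__()
        # Auxiliary linear heads applied to the penultimate-layer features.
        self.heads = nn.Linear(feature_dim, num_heads)
        # Frozen copy of the heads at initialization: these define the regression targets.
        self.target_heads = copy.deepcopy(self.heads)
        for p in self.target_heads.parameters():
            p.requires_grad_(False)
        self.scale = scale

    def forward(self, features_now: torch.Tensor, features_init: torch.Tensor) -> torch.Tensor:
        # Current auxiliary predictions from the live network's features.
        preds = self.heads(features_now)
        # Targets: outputs of the frozen initial heads on the initial network's features.
        with torch.no_grad():
            targets = self.scale * self.target_heads(features_init)
        # Mean squared error between current predictions and initialization-time targets.
        return ((preds - targets) ** 2).mean()

# Hypothetical usage inside a training step, where q_net is the live network,
# q_net_init is a frozen copy of it at initialization, and beta weights the loss:
#   features_now  = q_net.features(obs)
#   with torch.no_grad():
#       features_init = q_net_init.features(obs)
#   loss = td_loss + beta * infer_reg(features_now, features_init)
```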
