

Poster

Building Network Architectures for Continual Reinforcement Learning with Parseval Regularization

Wesley Chung · Lynn Cherif · Doina Precup · David Meger

West Ballroom A-D #6304
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Plasticity loss, trainability loss, and primacy bias have been identified as issues arising when training deep neural networks on sequences of tasks, all referring to the increased difficulty of training on new tasks. We propose using Parseval regularization, which maintains the orthogonality of weight matrices, to preserve useful optimization properties and improve training in a continual reinforcement learning setting. We show that it provides significant benefits to RL agents on a suite of gridworld, CARL, and MetaWorld tasks. We conduct comprehensive ablations to identify the source of these benefits and investigate its effect on metrics associated with network trainability, including weight matrix rank, weight norms, and policy entropy.
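To make the mechanism concrete, below is a minimal sketch of a Parseval regularization penalty in PyTorch, assuming the common soft-constraint formulation that penalizes the squared Frobenius deviation of each weight matrix's Gram matrix from the identity. The function name `parseval_penalty`, the coefficient `beta`, and the usage names `rl_loss` and `policy_network` are illustrative; the paper's exact variant (e.g., which layers are regularized, row vs. column orthogonality, penalty vs. retraction update) may differ.

```python
import torch
import torch.nn as nn


def parseval_penalty(model: nn.Module, beta: float = 1e-3) -> torch.Tensor:
    """Soft Parseval penalty: sum over linear layers of ||W W^T - I||_F^2.

    Pushing W W^T toward the identity encourages the rows of each weight
    matrix to stay orthonormal (the row/column convention is an assumption
    here), which preserves gradient norms through the layer.
    """
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight  # shape: (out_features, in_features)
            gram = w @ w.T     # (out_features, out_features) Gram matrix
            eye = torch.eye(gram.shape[0], device=w.device, dtype=w.dtype)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return beta * penalty


# Hypothetical usage inside an RL training step:
#   loss = rl_loss + parseval_penalty(policy_network)
#   loss.backward()
#   optimizer.step()
```

Adding the penalty to the task loss keeps the constraint soft, so the optimizer trades off orthogonality against reward; a hard-constraint alternative would project weights back toward the orthogonal manifold after each update.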
