

Poster in Workshop: Workshop on Distribution Shifts: New Frontiers with Foundation Models

Stochastic linear dynamics in parameters to deal with Neural Networks plasticity loss

Alexandre Galashov · Michalis Titsias · Razvan Pascanu · Yee Whye Teh · Maneesh Sahani

Keywords: [ non-stationarity ] [ plasticity loss ] [ online learning ]


Abstract:

Plasticity loss has become an active topic of interest in the continual learning community. Briefly, when faced with non-stationary data, standard gradient descent gradually loses the ability to train the network. Plasticity loss can take different subtle forms, from the inability of the network to generalize to its inability to optimize the training objective, and can have different causes, such as ill-conditioning or the saturation of activation functions. In this work we focus on the inability of neural networks to optimize due to saturating activations, which particularly affects online reinforcement learning settings, where the learning process itself creates non-stationarity even if the environment is kept fixed. Recent works have proposed to address this problem by dynamically resetting units that appear inactive, allowing them to be tuned further. We explore an alternative approach based on stochastic linear dynamics in parameter space, which models non-stationarity and provides a mechanism to adaptively and stochastically drift the parameters towards the prior, implementing a form of soft parameter reset.
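The abstract does not spell out the exact parameter dynamics, but the idea of stochastically drifting parameters towards a prior can be illustrated with a minimal sketch. The snippet below assumes an AR(1)-style linear drift with additive Gaussian noise applied after each gradient step; the function name soft_reset_step and the hyperparameters drift and noise_std are hypothetical and not taken from the paper.

```python
import numpy as np

def soft_reset_step(params, prior_mean, drift=0.01, noise_std=1e-3, rng=None):
    """One step of stochastic linear dynamics on the parameters.

    Illustrative sketch (not the authors' exact update): each parameter is
    pulled slightly towards the prior mean (a soft, partial reset) and
    perturbed with Gaussian noise, instead of being hard-reset to a fresh
    initialization when a unit saturates.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, noise_std, size=params.shape)
    return (1.0 - drift) * params + drift * prior_mean + noise

# Hypothetical usage: interleave with ordinary SGD updates.
rng = np.random.default_rng(0)
params = rng.normal(size=(256,))        # current network parameters (flattened)
prior_mean = np.zeros_like(params)      # prior, e.g. the initialization distribution mean
grad = rng.normal(size=params.shape)    # stand-in for a gradient from the task loss
params = params - 1e-2 * grad           # gradient step
params = soft_reset_step(params, prior_mean, rng=rng)  # stochastic drift towards the prior
```

In this sketch the drift coefficient controls how strongly parameters are pulled back towards the prior; in the paper this drift is described as adaptive and stochastic rather than a fixed constant.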
