Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning
Sanghwan Kim · Lorenzo Noci · Antonio Orvieto · Thomas Hofmann
Event URL: https://openreview.net/forum?id=LHzkFMv-dmV

In contrast to the natural ability of humans to learn new tasks sequentially, neural networks are known to suffer from catastrophic forgetting, where a model's performance on old tasks drops dramatically after it is optimized for a new one. In response, the continual learning community has proposed several solutions that aim to equip the neural network with the ability to learn the current task (plasticity) while still achieving high accuracy on previous tasks (stability). Despite remarkable improvements, the stability-plasticity trade-off is still far from being solved, and its underlying mechanism is poorly understood. In this work, we propose Auxiliary Network Continual Learning (ANCL), a new method that combines the continually learned model with an auxiliary network that is optimized solely on the new task. More concretely, the proposed framework materializes in a regularizer that naturally interpolates between plasticity and stability, surpassing strong baselines on CIFAR-100. By analyzing the solutions of several continual learning methods through the lens of the so-called mode connectivity assumption, we also propose a new hyperparameter search technique that dynamically adjusts the regularization parameter to achieve a better stability-plasticity trade-off.
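To make the interpolating regularizer concrete, below is a minimal PyTorch-style sketch assuming an EWC-style quadratic penalty (the abstract does not fix a particular penalty, so this is one illustrative instantiation). The current model is pulled toward the frozen model from the previous tasks (stability) and toward an auxiliary network trained only on the new task (plasticity). The names ancl_regularizer, lam_stab, and lam_plast are illustrative, not taken from the paper.

    import torch

    def ancl_regularizer(model, old_model, aux_model, lam_stab, lam_plast):
        # Quadratic sketch of an ANCL-like regularizer: pull the current
        # parameters toward the frozen old model (stability) and toward the
        # auxiliary network trained only on the new task (plasticity).
        reg = torch.zeros((), device=next(model.parameters()).device)
        for p, p_old, p_aux in zip(model.parameters(),
                                   old_model.parameters(),
                                   aux_model.parameters()):
            reg = reg + lam_stab * (p - p_old.detach()).pow(2).sum()
            reg = reg + lam_plast * (p - p_aux.detach()).pow(2).sum()
        return reg

    # Hypothetical usage inside a training step on the new task:
    #   loss = criterion(model(x), y) + ancl_regularizer(model, old_model,
    #                                                    aux_model, 0.5, 0.5)
    #   loss.backward(); optimizer.step()

Raising lam_plast relative to lam_stab shifts the solution toward the new-task optimum; this relative weighting is the kind of regularization parameter the proposed search technique adjusts dynamically.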

Author Information

Sanghwan Kim (ETH Zurich)
Lorenzo Noci (ETH Zurich)
Antonio Orvieto (ETH Zurich)

PhD Student at ETH Zurich. I’m interested in the design and analysis of optimization algorithms for deep learning. Interned at DeepMind, MILA, and Meta. All publications at http://orvi.altervista.org/. Looking for postdoc positions! :) Contact: antonio.orvieto@inf.ethz.ch

Thomas Hofmann (ETH Zurich)
