

Poster

SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models

Linglan Zhao · Xuerui Zhang · Ke Yan · Shouhong Ding · Weiran Huang

East Exhibit Hall A-C #3309
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Continual learning aims to incrementally acquire new concepts from data streams while resisting the forgetting of previous knowledge. With the rise of powerful pre-trained models (PTMs), there is growing interest in training incremental learning systems on top of these foundation models rather than learning from scratch. Existing works often view the PTM as a strong starting point and directly apply parameter-efficient tuning (PET) in the first session to adapt to downstream tasks. In the following sessions, most methods freeze the model parameters to tackle forgetting. However, applying PET directly to downstream data cannot fully exploit the inherent knowledge in PTMs. Additionally, freezing the parameters in incremental sessions hinders the model's plasticity toward novel concepts not covered in the first session. To address these issues, we propose a Slow And Fast parameter-Efficient tuning (SAFE) framework. In particular, to inherit general knowledge from foundation models, we include a transfer loss function that measures the correlation between the PTM and the PET-applied model. After calibration in the first session, the slow efficient tuning parameters can capture more informative features, improving generalization to incoming classes. Moreover, to further incorporate novel concepts, we strike a balance between stability and plasticity by fixing the slow efficient tuning parameters and continuously updating the fast ones. Specifically, a cross-classification loss with feature alignment is proposed to circumvent catastrophic forgetting. During inference, we introduce an entropy-based aggregation strategy to dynamically exploit the complementarity between the slow and fast learners. Extensive experiments on seven benchmark datasets verify the effectiveness of our method, which significantly surpasses the state-of-the-art.
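The abstract's entropy-based aggregation at inference can be illustrated with a minimal sketch: the slow and fast learners each produce class probabilities, and the more confident (lower-entropy) learner is given a larger weight when combining them. This is a hypothetical implementation under that assumption only; the function names, weighting scheme, and numerical details here are illustrative and may differ from the paper's actual method.

```python
import torch
import torch.nn.functional as F


def entropy(probs: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-sample Shannon entropy of a [batch, num_classes] probability tensor."""
    return -(probs * (probs + eps).log()).sum(dim=-1)


def aggregate_predictions(slow_logits: torch.Tensor,
                          fast_logits: torch.Tensor) -> torch.Tensor:
    """Combine slow/fast learner outputs, favoring the lower-entropy (more confident) one.

    Illustrative sketch only: the paper's exact aggregation rule is not
    specified in the abstract.
    """
    slow_probs = F.softmax(slow_logits, dim=-1)
    fast_probs = F.softmax(fast_logits, dim=-1)

    # Lower entropy -> higher confidence -> larger weight.
    slow_w = 1.0 / (entropy(slow_probs) + 1e-8)
    fast_w = 1.0 / (entropy(fast_probs) + 1e-8)
    total = slow_w + fast_w

    combined = (slow_w / total).unsqueeze(-1) * slow_probs \
             + (fast_w / total).unsqueeze(-1) * fast_probs
    return combined


# Usage (hypothetical models slow_model / fast_model):
# combined = aggregate_predictions(slow_model(x), fast_model(x))
# predictions = combined.argmax(dim=-1)
```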
