
Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning
Danruo DENG · Guangyong Chen · Jianye Hao · Qiong Wang · Pheng-Ann Heng

Wed Dec 08 12:30 AM -- 02:00 AM (PST)

Backpropagation networks are notably susceptible to catastrophic forgetting: they tend to forget previously learned skills upon learning new ones. To address this 'sensitivity-stability' dilemma, most previous efforts have focused on minimizing the empirical risk with various parameter regularization terms and episodic memory, but have rarely explored the weight loss landscape. In this paper, we investigate the relationship between the weight loss landscape and sensitivity-stability in the continual learning scenario, and on this basis propose a novel method, Flattening Sharpness for Dynamic Gradient Projection Memory (FS-DGPM). In particular, we introduce a soft weight to represent the importance of each basis representing past tasks in GPM, which can be adaptively learned during the learning process, so that less important bases can be dynamically released to improve the sensitivity of new skill learning. We further introduce Flattening Sharpness (FS) to reduce the generalization gap by explicitly regulating the flatness of the weight loss landscape of all seen tasks. As demonstrated empirically, our proposed method consistently outperforms baselines, showing a superior ability to learn new skills while effectively alleviating forgetting.
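The two mechanisms named in the abstract — a soft importance weight per GPM basis, and a sharpness-flattening perturbation before the update — can be sketched in a few lines. The snippet below is a minimal illustration on a toy quadratic loss, not the paper's implementation; the function names (`soft_project`, `fs_step`), the fixed soft weights, and the gradient-ascent perturbation used to probe sharpness are assumptions made for clarity.

```python
import numpy as np

# Toy quadratic loss standing in for the network loss over all seen tasks.
def loss(w):
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

def soft_project(g, bases, lam):
    """Subtract each stored basis component of g, scaled by its soft
    weight lam[i] in [0, 1]: lam=1 fully protects an old task's
    direction, lam=0 releases that basis for new learning.
    Assumes the bases are orthonormal."""
    out = g.copy()
    for b, l in zip(bases, lam):
        out -= l * np.dot(g, b) * b
    return out

def fs_step(w, bases, lam, rho=0.05, lr=0.1):
    """One sharpness-flattening update: ascend to a nearby worst-case
    point, take the gradient there, then project that gradient against
    the softly weighted memory bases before descending."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    g_sharp = grad(w + eps)                       # gradient at the sharp point
    return w - lr * soft_project(g_sharp, bases, lam)
```

With a fully protected basis (lam = 1) the update leaves that direction of the weights untouched while still descending elsewhere; lowering lam gradually re-opens the direction to new-task gradients.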

Author Information

Danruo DENG (The Chinese University of Hong Kong)
Guangyong Chen (SIAT, CAS)
Jianye Hao (Tianjin University)
Qiong Wang (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Pheng-Ann Heng (The Chinese University of Hong Kong)
