Continual learning (CL) aims to learn new tasks sequentially and transfer knowledge from old tasks to new ones without forgetting the old tasks, a failure mode well known as catastrophic forgetting. While recent structure-based learning methods show the capability to alleviate forgetting, they require a complex learning process that gradually grows and prunes a full-size network for each task, which is inefficient. To address this problem and enable efficient network expansion for new tasks, we develop, to the best of our knowledge, the first learnable sparse growth (LSG) method, which explicitly optimizes model growth so that only important and necessary channels are selected for growing. Building on LSG, we then propose CL-LSG, a novel end-to-end CL framework that grows the model dynamically and sparsely for each new task. Different from all previous structure-based CL methods, which start from a full-size network and then prune it (i.e., two-step), our framework starts from a compact seed network of much smaller size and grows it to the necessary model size for each task (i.e., one-step), eliminating the additional pruning that previous structure-based growing methods require.
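Below is a minimal PyTorch sketch of how a learnable sparse growth step could be realized, assuming each candidate new channel is gated by a learnable score trained with an L1 sparsity penalty. All names here (LSGConv2d, gate_logits, sparsity_loss) are hypothetical illustrations of the idea, not the authors' released code.

```python
# Minimal sketch of learnable sparse growth (LSG) for one conv layer.
# Assumption: channels from previous tasks are frozen, and candidate new
# channels are gated by learnable scores; an L1 penalty on the gates
# pushes unimportant candidates toward zero so only necessary channels grow.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSGConv2d(nn.Module):
    def __init__(self, in_ch, seed_out_ch, max_grow_ch, k=3):
        super().__init__()
        # Channels learned on previous tasks, frozen during the new task.
        self.old_weight = nn.Parameter(
            torch.randn(seed_out_ch, in_ch, k, k) * 0.1, requires_grad=False)
        # Candidate new channels for the current task.
        self.new_weight = nn.Parameter(
            torch.randn(max_grow_ch, in_ch, k, k) * 0.1)
        # One learnable gate per candidate channel; sigmoid(gate) acts as
        # a soft keep-probability for that channel.
        self.gate_logits = nn.Parameter(torch.zeros(max_grow_ch))

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)             # soft channel mask
        w_new = self.new_weight * gates.view(-1, 1, 1, 1)   # gate new channels
        weight = torch.cat([self.old_weight, w_new], dim=0)
        return F.conv2d(x, weight, padding=1)

    def sparsity_loss(self):
        # L1 penalty on the gates: unneeded candidates are driven toward
        # zero, so only "important and necessary" channels survive growth.
        return torch.sigmoid(self.gate_logits).sum()


# Usage: optimize task loss plus a weighted sparsity loss, then threshold
# the gates to decide which candidate channels are actually grown.
layer = LSGConv2d(in_ch=16, seed_out_ch=32, max_grow_ch=32)
x = torch.randn(4, 16, 8, 8)
out = layer(x)
loss = out.pow(2).mean() + 1e-3 * layer.sparsity_loss()
loss.backward()
kept = (torch.sigmoid(layer.gate_logits) > 0.5).sum().item()
print(f"candidate channels kept after thresholding: {kept}")
```

After training, gates above a threshold would be kept as grown channels and the rest discarded, which is how a one-step grow-only scheme avoids the separate pruning pass of grow-and-prune methods.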
Author Information
Li Yang (Arizona State University)
Sen Lin (Ohio State University, Columbus)
Junshan Zhang (University of California, Davis)
Deliang Fan (Arizona State University)
More from the Same Authors
- 2023 Poster: Slimmed Asymmetrical Contrastive Learning and Cross Distillation for Lightweight Model Training
  Jian Meng · Li Yang · Kyungmin Lee · Jinwoo Shin · Deliang Fan · Jae-sun Seo
- 2022 Spotlight: Lightning Talks 3B-4
  Guanghu Yuan · Yijing Liu · Li Yang · Yongri Piao · Zekang Zhang · Yaxin Xiao · Lin Chen · Hong Chang · Fajie Yuan · Guangyu Gao · Hong Chang · Qinxian Liu · Zhixiang Wei · Qingqing Ye · Chenyang Lu · Jian Meng · Haibo Hu · Xin Jin · Yudong Li · Miao Zhang · Zhiyuan Fang · Jae-sun Seo · Bingpeng MA · Jian-Wei Zhang · Shiguang Shan · Haozhe Feng · Huaian Chen · Deliang Fan · Huadi Zheng · Jianbo Jiao · Huchuan Lu · Beibei Kong · Miao Zheng · Chengfang Fang · Shujie Li · Zhongwei Wang · Yunchao Wei · Xilin Chen · Jie Shi · Kai Chen · Zihan Zhou · Lei Chen · Yi Jin · Wei Chen · Min Yang · Chenyun YU · Bo Hu · Zang Li · Yu Xu · Xiaohu Qie
- 2022 Spotlight: Get More at Once: Alternating Sparse Training with Gradient Correction
  Li Yang · Jian Meng · Jae-sun Seo · Deliang Fan
- 2022 Poster: Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer
  Sen Lin · Li Yang · Deliang Fan · Junshan Zhang
- 2022 Poster: Get More at Once: Alternating Sparse Training with Gradient Correction
  Li Yang · Jian Meng · Jae-sun Seo · Deliang Fan