Current deep neural networks can achieve remarkable performance on a single task. However, when a deep neural network is trained continually on a sequence of tasks, it tends to gradually forget previously learned knowledge. This phenomenon, referred to as catastrophic forgetting, motivates the field of lifelong learning. Recently, episodic-memory-based approaches such as GEM and A-GEM have shown remarkable performance. In this paper, we provide the first unified view of episodic-memory-based approaches from an optimization perspective. This view leads to two improved schemes for episodic-memory-based lifelong learning, called MEGA-I and MEGA-II. MEGA-I and MEGA-II modulate the balance between old tasks and the new task by integrating the current gradient with the gradient computed on the episodic memory. Notably, we show that GEM and A-GEM are degenerate cases of MEGA-I and MEGA-II that consistently put the same emphasis on the current task, regardless of how the loss changes over time. Our proposed schemes address this issue with novel loss-balancing update rules, which drastically improve performance over GEM and A-GEM. Extensive experimental results show that the proposed schemes significantly advance the state of the art on four commonly used lifelong learning benchmarks, reducing error by up to 18%.
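To make the contrast concrete, the following is a minimal NumPy sketch of the two update styles the abstract describes. It is illustrative only, not the authors' exact rules: `mega_style_update` weights the episodic-memory gradient by the ratio of the memory loss to the current loss (so old tasks that are being forgotten pull the update harder), while `agem_style_update` shows the A-GEM-like degenerate case, which keeps fixed emphasis on the current task and merely projects out interference. All function names and the learning-rate/epsilon parameters are hypothetical.

```python
import numpy as np

def mega_style_update(w, grad_cur, grad_mem, loss_cur, loss_mem, lr=0.1, eps=1e-8):
    """Loss-balanced gradient mixing (sketch): the weight on the memory
    gradient grows as the loss on the episodic memory grows relative to
    the loss on the current task."""
    alpha = loss_mem / (loss_cur + eps)      # emphasis on old tasks
    direction = grad_cur + alpha * grad_mem  # mix current and memory gradients
    return w - lr * direction

def agem_style_update(w, grad_cur, grad_mem, lr=0.1, eps=1e-8):
    """A-GEM-like degenerate case (sketch): fixed emphasis on the current
    task; if the current gradient conflicts with the memory gradient
    (negative inner product), project the conflict away."""
    dot = grad_cur @ grad_mem
    if dot < 0:  # step would increase loss on the episodic memory
        grad_cur = grad_cur - (dot / (grad_mem @ grad_mem + eps)) * grad_mem
    return w - lr * grad_cur
```

Note that the A-GEM-style rule never changes its treatment of the current task based on the losses, which is exactly the limitation the loss-balancing rule is meant to remove.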
Author Information
Yunhui Guo (University of California, San Diego)
Mingrui Liu (Boston University)
Tianbao Yang (The University of Iowa)
Tajana S Rosing (UCSD)
Tajana Šimunić Rosing is a Professor, a holder of the Fratamico Endowed Chair, an IEEE Fellow, and the director of the System Energy Efficiency Lab at UCSD. Her research interests are in energy-efficient computing and cyber-physical and distributed systems. She is leading a number of projects, including efforts funded by the DARPA/SRC JUMP CRISP program focused on the design of accelerators for big-data analysis, a project on developing AI systems in support of healthy living, an SRC-funded project on IoT system reliability and maintainability, and an NSF-funded project on the design and calibration of air-quality sensors, among others. She recently headed the SmartCities effort that was part of the DARPA- and industry-funded TerraSwarm center. Tajana led the energy-efficient datacenters theme in the MuSyC center, and a number of large projects funded by both industry and government focused on power and thermal management. Her work on proactive thermal management and ambient-driven thermal modeling was instrumental in laying the groundwork in this field, and has since resulted in a number of industrial implementations of these ideas. Her research on event-driven dynamic power management laid the mathematical foundations for the engineering problem, devised a globally optimal solution, and, more importantly, defined the framework for future researchers to approach such problems in embedded system design. From 1998 until 2005 she was a full-time research scientist at HP Labs while also leading research efforts at Stanford University. She finished her PhD in EE at Stanford in 2001, concurrently with a Master's in Engineering Management; her PhD topic was dynamic management of power consumption. Prior to pursuing the PhD, she worked as a senior design engineer at Altera Corporation.
She has served on a number of technical program committees, and has been an Associate Editor of IEEE Transactions on Mobile Computing, an Associate Editor of IEEE Transactions on Circuits and Systems, and a Guest Editor for a Special Issue of IEEE Transactions on VLSI.
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Improved Schemes for Episodic Memory-based Lifelong Learning
  Thu. Dec 10th, 03:00 -- 03:10 AM, Orals & Spotlights: Graph/Meta Learning/Software
More from the Same Authors
- 2021: Practice-Consistent Analysis of Adam-Style Methods
  Zhishuai Guo · Yi Xu · Wotao Yin · Rong Jin · Tianbao Yang
- 2021: A Stochastic Momentum Method for Min-max Bilevel Optimization
  Quanqi Hu · Bokun Wang · Tianbao Yang
- 2021: A Unified DRO View of Multi-class Loss Functions with top-N Consistency
  Dixian Zhu · Tianbao Yang
- 2021: Deep AUC Maximization for Medical Image Classification: Challenges and Opportunities
  Tianbao Yang
- 2022 Spotlight: Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
  Wei Jiang · Gang Li · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Spotlight: Lightning Talks 6B-1
  Yushun Zhang · Duc Nguyen · Jiancong Xiao · Wei Jiang · Yaohua Wang · Yilun Xu · Zhen LI · Anderson Ye Zhang · Ziming Liu · Fangyi Zhang · Gilles Stoltz · Congliang Chen · Gang Li · Yanbo Fan · Ruoyu Sun · Naichen Shi · Yibo Wang · Ming Lin · Max Tegmark · Lijun Zhang · Jue Wang · Ruoyu Sun · Tommi Jaakkola · Senzhang Wang · Zhi-Quan Luo · Xiuyu Sun · Zhi-Quan Luo · Tianbao Yang · Rong Jin
- 2022 Spotlight: Lightning Talks 4A-2
  Barakeel Fanseu Kamhoua · Hualin Zhang · Taiki Miyagawa · Tomoya Murata · Xin Lyu · Yan Dai · Elena Grigorescu · Zhipeng Tu · Lijun Zhang · Taiji Suzuki · Wei Jiang · Haipeng Luo · Lin Zhang · Xi Wang · Young-San Lin · Huan Xiong · Liyu Chen · Bin Gu · Jinfeng Yi · Yongqiang Chen · Sandeep Silwal · Yiguang Hong · Maoyuan Song · Lei Wang · Tianbao Yang · Han Yang · MA Kaili · Samson Zhou · Deming Yuan · Bo Han · Guodong Shi · Bo Li · James Cheng
- 2022 Spotlight: A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
  Mingrui Liu · Zhenxun Zhuang · Yunwen Lei · Chunyang Liao
- 2022 Spotlight: Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor
  Lijun Zhang · Wei Jiang · Jinfeng Yi · Tianbao Yang
- 2022 Spotlight: Will Bilevel Optimizers Benefit from Loops
  Kaiyi Ji · Mingrui Liu · Yingbin Liang · Lei Ying
- 2022 Poster: A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
  Mingrui Liu · Zhenxun Zhuang · Yunwen Lei · Chunyang Liao
- 2022 Poster: Robustness to Unbounded Smoothness of Generalized SignSGD
  Michael Crawshaw · Mingrui Liu · Francesco Orabona · Wei Zhang · Zhenxun Zhuang
- 2022 Poster: Multi-block Min-max Bilevel Optimization with Applications in Multi-task Deep AUC Maximization
  Quanqi Hu · YONGJIAN ZHONG · Tianbao Yang
- 2022 Poster: Large-scale Optimization of Partial AUC in a Range of False Positive Rates
  Yao Yao · Qihang Lin · Tianbao Yang
- 2022 Poster: Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor
  Lijun Zhang · Wei Jiang · Jinfeng Yi · Tianbao Yang
- 2022 Poster: Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
  Wei Jiang · Gang Li · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: Will Bilevel Optimizers Benefit from Loops
  Kaiyi Ji · Mingrui Liu · Yingbin Liang · Lei Ying
- 2021 Poster: Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning
  ZHENHUAN YANG · Yunwen Lei · Puyu Wang · Tianbao Yang · Yiming Ying
- 2021 Poster: Revisiting Smoothed Online Learning
  Lijun Zhang · Wei Jiang · Shiyin Lu · Tianbao Yang
- 2021 Poster: Generalization Guarantee of SGD for Pairwise Learning
  Yunwen Lei · Mingrui Liu · Yiming Ying
- 2021 Poster: Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence
  Qi Qi · Youzhi Luo · Zhao Xu · Shuiwang Ji · Tianbao Yang
- 2021 Poster: Online Convex Optimization with Continuous Switching Constraint
  Guanghui Wang · Yuanyu Wan · Tianbao Yang · Lijun Zhang
- 2021 Poster: An Online Method for A Class of Distributionally Robust Optimization with Non-convex Objectives
  Qi Qi · Zhishuai Guo · Yi Xu · Rong Jin · Tianbao Yang
- 2020 Poster: A Decentralized Parallel Algorithm for Training Generative Adversarial Nets
  Mingrui Liu · Wei Zhang · Youssef Mroueh · Xiaodong Cui · Jarret Ross · Tianbao Yang · Payel Das
- 2020 Poster: Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization
  Yan Yan · Yi Xu · Qihang Lin · Wei Liu · Tianbao Yang
- 2019 Poster: Non-asymptotic Analysis of Stochastic Methods for Non-Smooth Non-Convex Regularized Problems
  Yi Xu · Rong Jin · Tianbao Yang
- 2019 Poster: Stagewise Training Accelerates Convergence of Testing Error Over SGD
  Zhuoning Yuan · Yan Yan · Rong Jin · Tianbao Yang
- 2018: Poster spotlight
  Tianbao Yang · Pavel Dvurechenskii · Panayotis Mertikopoulos · Hugo Berard
- 2018 Poster: First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time
  Yi Xu · Rong Jin · Tianbao Yang
- 2018 Poster: Adaptive Negative Curvature Descent with Applications in Non-convex Optimization
  Mingrui Liu · Zhe Li · Xiaoyu Wang · Jinfeng Yi · Tianbao Yang
- 2018 Poster: Faster Online Learning of Optimal Threshold for Consistent F-measure Optimization
  Xiaoxuan Zhang · Mingrui Liu · Xun Zhou · Tianbao Yang
- 2018 Poster: Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions
  Mingrui Liu · Xiaoxuan Zhang · Lijun Zhang · Rong Jin · Tianbao Yang
- 2017 Poster: ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization
  Yi Xu · Mingrui Liu · Qihang Lin · Tianbao Yang
- 2017 Poster: Improved Dynamic Regret for Non-degenerate Functions
  Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou
- 2017 Poster: Adaptive Accelerated Gradient Converging Method under Hölderian Error Bound Condition
  Mingrui Liu · Tianbao Yang
- 2017 Poster: Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter
  Yi Xu · Qihang Lin · Tianbao Yang
- 2016 Poster: Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than O(1/ε)
  Yi Xu · Yan Yan · Qihang Lin · Tianbao Yang
- 2016 Poster: Improved Dropout for Shallow and Deep Learning
  Zhe Li · Boqing Gong · Tianbao Yang