Poster
Retrospective Adversarial Replay for Continual Learning
Lilly Kumari · Shengjie Wang · Tianyi Zhou · Jeff A Bilmes

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #121

Continual learning is an emerging research challenge in machine learning that addresses the problem where models quickly fit the most recently trained-on data but suffer from catastrophic forgetting of previous data due to distribution shifts; replay-based methods mitigate this by maintaining a small historical replay buffer. To avoid these problems, this paper proposes a method, "Retrospective Adversarial Replay (RAR)", that synthesizes adversarial samples near the forgetting boundary. RAR perturbs a buffered sample towards its nearest neighbor drawn from the current task in a latent representation space. By replaying such samples, we are able to refine the boundary between previous and current tasks, hence combating forgetting and reducing bias towards the current task. To mitigate the severity of a small replay buffer, we develop a novel MixUp-based strategy to increase replay variation by replaying mixed augmentations. Combined with RAR, this achieves a holistic framework that helps to alleviate catastrophic forgetting. We show that RAR excels on broadly used benchmarks and outperforms other continual learning baselines, especially when only a small replay buffer is available. We conduct a thorough ablation study over each key component as well as a hyperparameter sensitivity analysis to demonstrate the effectiveness and robustness of RAR.
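The sketch below illustrates the two ideas described in the abstract: perturbing a buffered sample toward its nearest current-task neighbor in latent space, and mixing buffered samples to increase replay variation. It is a minimal illustration, not the authors' implementation; it assumes a PyTorch model exposing a latent encoder (here called model.features), and the function names perturb_toward_neighbor and mixup_buffer are hypothetical.

# Minimal sketch of the retrospective adversarial replay idea (assumptions noted above).
import torch
import torch.nn.functional as F

def perturb_toward_neighbor(model, buf_x, cur_x, eps=0.07):
    """Perturb buffered samples toward their nearest current-task neighbors
    in latent space with one FGSM-style step (inputs assumed in [0, 1])."""
    buf_x = buf_x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        cur_z = model.features(cur_x)          # latent codes of current-task batch
    buf_z = model.features(buf_x)              # latent codes of buffered batch
    dists = torch.cdist(buf_z, cur_z)          # pairwise L2 distances [B_buf, B_cur]
    target_z = cur_z[dists.argmin(dim=1)]      # nearest current-task neighbor per sample
    loss = F.mse_loss(buf_z, target_z)         # latent distance to the neighbor
    loss.backward()
    adv_x = buf_x - eps * buf_x.grad.sign()    # step that shrinks the latent distance
    return adv_x.detach().clamp(0.0, 1.0)

def mixup_buffer(x1, y1, x2, y2, alpha=0.4):
    """MixUp two buffered mini-batches; train with the lam-weighted loss on (y1, y2)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x1 + (1.0 - lam) * x2, y1, y2, lam

In use, one would replay the perturbed and mixed samples alongside the current-task batch; the choice of step size eps and MixUp parameter alpha is a tunable assumption here, not a value taken from the paper.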

Author Information

Lilly Kumari (University of Washington, Seattle)
Shengjie Wang (University of Washington)
Tianyi Zhou (University of Washington, Seattle)

Tianyi Zhou is a Ph.D. student in Computer Science at the University of Washington and a member of the MELODI lab led by Prof. Jeff A. Bilmes. He will be joining the University of Maryland, College Park in 2022 as a tenure-track assistant professor in the Department of Computer Science, affiliated with UMIACS. His research interests are in machine learning, optimization, and natural language processing. He has published ~60 papers at NeurIPS, ICML, ICLR, AISTATS, EMNLP, NAACL, COLING, KDD, ICDM, AAAI, IJCAI, ISIT, Machine Learning (Springer), IEEE TIP/TNNLS/TKDE, etc. He is the recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE TCSC Most Influential Paper Award.

Jeff A Bilmes (University of Washington, Seattle)
