
Look-ahead Meta Learning for Continual Learning
Gunshi Gupta · Karmesh Yadav · Liam Paull

Wed Dec 09 06:15 AM -- 06:30 AM (PST) @ Orals & Spotlights: Continual/Meta/Misc Learning

The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. By incorporating the modulation of per-parameter learning rates in our meta-learning update, our approach also allows us to draw connections to and exploit prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting than conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
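To make the look-ahead idea concrete, the following is a minimal sketch of a La-MAML-style update on a toy linear model: an inner SGD step on new-task data uses per-parameter learning rates, and the meta-gradient of the loss on an episodic-memory sample, taken through that inner step, updates both the learning rates and the weights. All names (`la_maml_step`, `eta`, the single inner step, the linear model) are illustrative assumptions; the paper's algorithm operates on deep networks over multiple inner steps.

```python
import numpy as np

def la_maml_step(w, alpha, x_new, y_new, x_mem, y_mem, eta=0.01):
    """One La-MAML-style update on a linear model y = w.x (sketch only).

    Inner step: SGD on the new-task sample with per-parameter rates alpha.
    Outer step: meta-gradient of the memory-sample loss, differentiated
    through the inner step, updates alpha first and then w.
    """
    # Inner loss L = (w.x - y)^2 and its gradient on the new-task sample
    g = 2.0 * (w @ x_new - y_new) * x_new
    w_inner = w - alpha * g                      # per-parameter inner step

    # Gradient of the meta-loss (memory sample) at the post-inner weights
    dL_dw_inner = 2.0 * (w_inner @ x_mem - y_mem) * x_mem

    # Hypergradient w.r.t. alpha: d(w_inner_i)/d(alpha_i) = -g_i
    dL_dalpha = -dL_dw_inner * g

    # Gradient w.r.t. w through the inner step:
    # d(w_inner_i)/d(w_j) = delta_ij - alpha_i * 2 * x_i * x_j
    jac = np.eye(len(w)) - alpha[:, None] * 2.0 * np.outer(x_new, x_new)
    dL_dw = jac.T @ dL_dw_inner

    # Look-ahead: update the learning rates first, then use the clipped
    # new rates in the weight update so negative rates freeze a parameter
    alpha_new = alpha - eta * dL_dalpha
    w_new = w - np.maximum(alpha_new, 0.0) * dL_dw
    return w_new, alpha_new
```

Clipping the updated rates at zero in the weight update is what lets the meta-learned `alpha` act as a soft, per-parameter gate against interference with old tasks.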

Author Information

Gunshi Gupta (Université de Montréal)
Karmesh Yadav (Carnegie Mellon University)
Liam Paull (Université de Montréal)
