Continual learning, the setting in which a learning agent faces a never-ending stream of data, remains a major challenge for modern machine learning systems. In particular, the online or "single pass through the data" setting has recently gained attention as a natural setting that is difficult to tackle. Replay-based methods, whether generative or drawing from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art on a number of standard benchmarks. These approaches typically rely on randomly selecting samples from the replay memory or from a generative model, which is suboptimal. In this work, we consider a controlled sampling of memories for replay. We retrieve the samples that are most interfered, i.e., those whose prediction will be most negatively impacted by the foreseen parameter update. We show a formulation of this sampling criterion in both the generative replay and the experience replay settings, producing consistent gains in performance and greatly reduced forgetting. We release an implementation of our method at https://github.com/optimass/MaximallyInterferedRetrieval.
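The retrieval criterion lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of the experience-replay variant: a copy of the model takes one virtual SGD step on the incoming batch, and candidate memory samples are ranked by how much that step would increase their loss. The function name `mir_retrieve`, the learning rate, the top-k size, and the per-sample cross-entropy loss are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of Maximally Interfered Retrieval for experience replay,
# under the assumptions stated above (hypothetical names and defaults).
import copy

import torch
import torch.nn as nn


def mir_retrieve(model, incoming_x, incoming_y, memory_x, memory_y,
                 lr=0.1, k=10):
    """Rank memory samples by how much a virtual SGD step on the
    incoming batch would increase their loss; return the top-k
    most interfered samples for replay."""
    criterion = nn.CrossEntropyLoss(reduction='none')  # per-sample losses

    # Loss on the candidate memory samples under the current parameters.
    with torch.no_grad():
        pre_loss = criterion(model(memory_x), memory_y)

    # Foreseen (virtual) update: one SGD step on the incoming batch,
    # applied to a deep copy so the real model is left untouched.
    virtual = copy.deepcopy(model)
    params = [p for p in virtual.parameters() if p.requires_grad]
    loss = criterion(virtual(incoming_x), incoming_y).mean()
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(lr * g)
        # Loss on the same memory samples after the virtual update.
        post_loss = criterion(virtual(memory_x), memory_y)

    # Interference score: increase in loss caused by the foreseen update.
    scores = post_loss - pre_loss
    top = scores.topk(min(k, scores.numel())).indices
    return memory_x[top], memory_y[top]
```

In practice the retrieved pairs would be mixed into the next training batch, so the actual parameter update is taken on both the incoming data and the memories it would otherwise hurt the most.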
Author Information
Rahaf Aljundi (KU Leuven, Belgium)
Eugene Belilovsky (Mila, University of Montreal)
Tinne Tuytelaars (KU Leuven)
Laurent Charlin (Mila, University of Montreal)
Massimo Caccia (Mila)
Min Lin (Mila)
Lucas Page-Caccia (McGill University)
More from the Same Authors
- 2020 Poster: Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning
  Massimo Caccia · Pau Rodriguez · Oleksiy Ostapenko · Fabrice Normandin · Min Lin · Lucas Page-Caccia · Issam Hadj Laradji · Irina Rish · Alexandre Lacoste · David Vázquez · Laurent Charlin
- 2020 Poster: Synbols: Probing Learning Algorithms with Synthetic Datasets
  Alexandre Lacoste · Pau Rodríguez López · Frederic Branchaud-Charron · Parmida Atighehchian · Massimo Caccia · Issam Hadj Laradji · Alexandre Drouin · Matthew Craddock · Laurent Charlin · David Vázquez
- 2020 Session: Orals & Spotlights Track 16: Continual/Meta/Misc Learning
  Laurent Charlin · Cedric Archambeau
- 2019 Poster: Gradient based sample selection for online continual learning
  Rahaf Aljundi · Min Lin · Baptiste Goujaud · Yoshua Bengio
- 2019 Poster: Exact Combinatorial Optimization with Graph Convolutional Neural Networks
  Maxime Gasse · Didier Chetelat · Nicola Ferroni · Laurent Charlin · Andrea Lodi
- 2018 Poster: Towards Deep Conversational Recommendations
  Raymond Li · Samira Ebrahimi Kahou · Hannes Schulz · Vincent Michalski · Laurent Charlin · Chris Pal
- 2017 Poster: Pose Guided Person Image Generation
  Liqian Ma · Xu Jia · Qianru Sun · Bernt Schiele · Tinne Tuytelaars · Luc Van Gool
- 2016 Poster: Dynamic Filter Networks
  Xu Jia · Bert De Brabandere · Tinne Tuytelaars · Luc Van Gool
- 2014 Poster: Content-based recommendations with Poisson factorization
  Prem Gopalan · Laurent Charlin · David Blei
- 2006 Poster: Automated Hierarchy Discovery for Planning in Partially Observable Domains
  Laurent Charlin · Pascal Poupart · Romy Shioda