Modularity is a compelling solution to continual learning (CL), the problem of modeling sequences of related tasks. Learning and composing modules to solve different tasks provides an abstraction that addresses the principal challenges of CL, including catastrophic forgetting, backward and forward transfer across tasks, and sub-linear model growth. We introduce local module composition (LMC), an approach to modular CL in which each module is paired with a local structural component that estimates the module's relevance to the input. Modules are then composed dynamically, layer by layer, based on these local relevance scores. We show that agnosticity to task identities (IDs) arises from structural learning that is local and module-specific, as opposed to the task- and/or model-specific structural learning of previous works, which makes LMC applicable to a broader range of CL settings. In addition, LMC tracks statistics of the input distribution and adds new modules when outlier samples are detected. In a first set of experiments, LMC compares favorably to existing methods on the recent Continual Transfer-learning Benchmark without requiring task identities. In a second study, we show that the locality of structural learning allows LMC to interpolate to related but unseen (out-of-distribution) tasks, and to compose modular networks trained independently on different task sequences into a third modular network without any fine-tuning. Finally, probing the limitations of LMC, we study it on more challenging sequences of 30 and 100 tasks, demonstrating that local module selection becomes much harder in the presence of a large number of candidate modules. In this setting, the best-performing LMC spawns far fewer modules than an oracle-based baseline but reaches a lower overall accuracy. The codebase is available at https://github.com/oleksost/LMC.
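The layer-wise composition mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the linear relevance scorer, the ReLU functional modules, and the softmax mixing of module outputs are all simplifying assumptions made here for clarity; the paper's structural components and composition rule are more involved (see the linked codebase for the real implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class Module:
    """A functional component paired with a local structural component
    that scores this module's relevance to its own input."""
    def __init__(self, d_in, d_out):
        self.W = rng.normal(0.0, 0.1, (d_in, d_out))  # functional weights
        self.v = rng.normal(0.0, 0.1, d_in)           # structural scorer (hypothetical linear form)

    def forward(self, x):
        return np.maximum(x @ self.W, 0.0)            # module's feature transform

    def relevance(self, x):
        return float(x @ self.v)                      # local relevance score for input x

def compose_layer(modules, x):
    """Layer-wise composition: every candidate module scores the incoming
    activation locally; outputs are mixed by a softmax over those scores,
    so no task ID is needed to pick modules."""
    scores = np.array([m.relevance(x) for m in modules])
    weights = softmax(scores)                         # sums to 1 across modules
    outputs = np.stack([m.forward(x) for m in modules])
    return weights @ outputs                          # convex combination of module outputs

# Two layers, each with three candidate modules.
layers = [[Module(4, 4) for _ in range(3)] for _ in range(2)]
h = rng.normal(size=4)
for layer in layers:
    h = compose_layer(layer, h)
```

Because each relevance score depends only on a module's own input at its own layer, adding a new module (e.g. when an input is flagged as an outlier) only appends one entry to the softmax at that layer, leaving the rest of the network untouched.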
Author Information
Oleksiy Ostapenko (University of Montreal, Mila)
Pau Rodriguez (Element AI)
Massimo Caccia (Mila)
Laurent Charlin (Mila / University of Montreal)
More from the Same Authors
- 2022: Attention for Compositional Modularity
  Oleksiy Ostapenko · Pau Rodriguez · Alexandre Lacoste · Laurent Charlin
- 2022: Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs
  Pierre-André Noël · Pau Rodriguez
- 2021: Machine Learning for Combinatorial Optimization + Q&A
  Maxime Gasse · Simon Bowly · Chris Cameron · Quentin Cappart · Jonas Charfreitag · Laurent Charlin · Shipra Agrawal · Didier Chetelat · Justin Dumouchelle · Ambros Gleixner · Aleksandr Kazachkov · Elias Khalil · Pawel Lichocki · Andrea Lodi · Miles Lubin · Christopher Morris · Dimitri Papageorgiou · Augustin Parjadis · Sebastian Pokutta · Antoine Prouvost · Yuandong Tian · Lara Scavuzzo · Giulia Zarpellon
- 2021 Poster: Pretraining Representations for Data-Efficient Reinforcement Learning
  Max Schwarzer · Nitarshan Rajkumar · Michael Noukhovitch · Ankesh Anand · Laurent Charlin · R Devon Hjelm · Philip Bachman · Aaron Courville
- 2021 Poster: Learning where to learn: Gradient sparsity in meta and continual learning
  Johannes von Oswald · Dominic Zhao · Seijin Kobayashi · Simon Schug · Massimo Caccia · Nicolas Zucchet · João Sacramento
- 2020 Poster: Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning
  Massimo Caccia · Pau Rodriguez · Oleksiy Ostapenko · Fabrice Normandin · Min Lin · Lucas Page-Caccia · Issam Hadj Laradji · Irina Rish · Alexandre Lacoste · David Vázquez · Laurent Charlin
- 2020 Poster: Synbols: Probing Learning Algorithms with Synthetic Datasets
  Alexandre Lacoste · Pau Rodríguez López · Frederic Branchaud-Charron · Parmida Atighehchian · Massimo Caccia · Issam Hadj Laradji · Alexandre Drouin · Matthew Craddock · Laurent Charlin · David Vázquez
- 2020 Session: Orals & Spotlights Track 16: Continual/Meta/Misc Learning
  Laurent Charlin · Cedric Archambeau
- 2019 Poster: Online Continual Learning with Maximal Interfered Retrieval
  Rahaf Aljundi · Eugene Belilovsky · Tinne Tuytelaars · Laurent Charlin · Massimo Caccia · Min Lin · Lucas Page-Caccia
- 2019 Poster: Exact Combinatorial Optimization with Graph Convolutional Neural Networks
  Maxime Gasse · Didier Chetelat · Nicola Ferroni · Laurent Charlin · Andrea Lodi
- 2018: Lunch & Posters
  Haytham Fayek · German Parisi · Brian Xu · Pramod Kaushik Mudrakarta · Sophie Cerf · Sarah Wassermann · Davit Soselia · Rahaf Aljundi · Mohamed Elhoseiny · Frantzeska Lavda · Kevin J Liang · Arslan Chaudhry · Sanmit Narvekar · Vincenzo Lomonaco · Wesley Chung · Michael Chang · Ying Zhao · Zsolt Kira · Pouya Bashivan · Banafsheh Rafiee · Oleksiy Ostapenko · Andrew Jones · Christos Kaplanis · Sinan Kalkan · Dan Teng · Xu He · Vincent Liu · Somjit Nath · Sungsoo Ahn · Ting Chen · Shenyang Huang · Yash Chandak · Nathan Sprague · Martin Schrimpf · Tony Kendall · Jonathan Richard Schwarz · Michael Li · Yunshu Du · Yen-Chang Hsu · Samira Abnar · Bo Wang
- 2018: Poster Session 1 (note there are numerous missing names here, all papers appear in all poster sessions)
  Akhilesh Gotmare · Kenneth Holstein · Jan Brabec · Michal Uricar · Kaleigh Clary · Cynthia Rudin · Sam Witty · Andrew Ross · Shayne O'Brien · Babak Esmaeili · Jessica Forde · Massimo Caccia · Ali Emami · Scott Jordan · Bronwyn Woods · D. Sculley · Rebekah Overdorf · Nicolas Le Roux · Peter Henderson · Brandon Yang · Tzu-Yu Liu · David Jensen · Niccolo Dalmasso · Weitang Liu · Paul Marc TRICHELAIR · Jun Ki Lee · Akanksha Atrey · Matt Groh · Yotam Hechtlinger · Emma Tosch
- 2018 Poster: Towards Deep Conversational Recommendations
  Raymond Li · Samira Ebrahimi Kahou · Hannes Schulz · Vincent Michalski · Laurent Charlin · Chris Pal
- 2014 Poster: Content-based recommendations with Poisson factorization
  Prem Gopalan · Laurent Charlin · David Blei
- 2006 Poster: Automated Hierarchy Discovery for Planning in Partially Observable Domains
  Laurent Charlin · Pascal Poupart · Romy Shioda