Organizing recurrent network dynamics by task-computation to enable continual learning
Lea Duncker · Laura N Driscoll · Krishna V Shenoy · Maneesh Sahani · David Sussillo

Wed Dec 09 09:00 PM -- 11:00 PM (PST) @ Poster Session 4 #1209

Biological systems face dynamic environments that require continual learning. It is not well understood how these systems balance the tension between flexibility for learning and robustness for memory of previous behaviors. Continual learning without catastrophic interference also remains a challenging problem in machine learning. Here, we develop a novel learning rule designed to minimize interference between sequentially learned tasks in recurrent networks. Our learning rule preserves network dynamics within activity-defined subspaces used for previously learned tasks. It encourages dynamics associated with new tasks that might otherwise interfere to instead explore orthogonal subspaces, and it allows for reuse of previously established dynamical motifs where possible. Employing a set of tasks used in neuroscience, we demonstrate that our approach successfully eliminates catastrophic interference and offers a substantial improvement over previous continual learning algorithms. Using dynamical systems analysis, we show that networks trained using our approach can reuse similar dynamical structures across similar tasks. This possibility for shared computation allows for faster learning during sequential training. Finally, we identify organizational differences that emerge when training tasks sequentially versus simultaneously.
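The core idea in the abstract — constraining new learning so it cannot disturb dynamics within the activity subspaces of previously learned tasks — can be illustrated with a projection. The following is a minimal NumPy sketch, not the paper's actual learning rule (which also handles input and readout projections and supports reuse of dynamical motifs): `A` is a hypothetical matrix whose columns span the hidden-activity subspace of an earlier task, and the candidate weight update is projected onto the orthogonal complement of that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # number of hidden units (illustrative)

# Hypothetical activity basis from a previously learned task:
# columns span the subspace of hidden states that task's dynamics use.
A = rng.standard_normal((n, 5))

# Projector onto the orthogonal complement of span(A).
P = np.eye(n) - A @ np.linalg.pinv(A)

# Candidate (unconstrained) gradient update for the recurrent weights.
dW = rng.standard_normal((n, n))

# Constrain the update: right-multiplying by P removes its action on
# the old task's activity subspace, so those dynamics are preserved.
dW_safe = dW @ P

# The constrained update has (numerically) zero effect on old-task states.
residual = np.abs(dW_safe @ A).max()
```

Because `P` annihilates every vector in `span(A)`, applying `dW_safe` to any old-task hidden state changes nothing, while components of the update orthogonal to that subspace pass through untouched — new tasks are steered into unused directions of state space.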

Author Information

Lea Duncker (Gatsby Unit, UCL)
Laura N Driscoll (Stanford)
Krishna V Shenoy (Stanford University)
Maneesh Sahani (Gatsby Unit, UCL)
David Sussillo (Stanford University)