Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, where forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically. An algorithm called feedback alignment achieves deep learning without weight transport by using random feedback weights, but it performs poorly on hard visual-recognition tasks. Here we describe two mechanisms — a neural circuit called a weight mirror and a modification of an algorithm proposed by Kolen and Pollack in 1994 — both of which let the feedback path learn appropriate synaptic weights quickly and accurately even in large networks, without weight transport or complex wiring. Tested on the ImageNet visual-recognition task, these mechanisms outperform both feedback alignment and the newer sign-symmetry method, and nearly match backprop, the standard algorithm of deep learning, which uses weight transport.
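To make the setting concrete, the sketch below (my own toy illustration with assumed names and dimensions, not the authors' code) trains a small two-layer network whose feedback path uses a separate matrix B in place of the transposed forward weights. With use_kolen_pollack = False it behaves like feedback alignment, where B stays at its random initial value; with use_kolen_pollack = True, B receives the transpose of the forward-weight update plus the same weight decay, in the spirit of the Kolen-Pollack mechanism, so B drifts toward the transpose of the forward weights without any weight transport.

# Minimal sketch (illustration only): feedback through a separate matrix B.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B  = rng.normal(0, 0.1, (n_hid, n_out))   # feedback weights, used instead of W2.T

eta, lam = 0.05, 0.01                      # learning rate, weight decay
use_kolen_pollack = True                   # False -> plain feedback alignment

for step in range(2000):
    x = rng.normal(size=(n_in, 1))
    target = rng.normal(size=(n_out, 1))   # toy regression target

    # forward pass
    h = np.tanh(W1 @ x)
    y = W2 @ h

    # backward pass: the error is sent through B, not through W2.T
    e = y - target
    delta_h = (B @ e) * (1 - h**2)

    # forward-weight updates, as in ordinary gradient descent
    dW2 = -eta * e @ h.T
    dW1 = -eta * delta_h @ x.T
    W2 += dW2 - lam * W2
    W1 += dW1 - lam * W1

    if use_kolen_pollack:
        # Kolen-Pollack-style rule: B gets the transpose of W2's update
        # plus the same decay, so B and W2.T converge toward each other
        # without W2 ever being copied to the feedback path.
        B += dW2.T - lam * B

# cosine similarity between B and W2.T: near 1 with the Kolen-Pollack rule,
# much lower when B is left fixed (feedback alignment)
cos = np.sum(B * W2.T) / (np.linalg.norm(B) * np.linalg.norm(W2))
print(f"alignment between B and W2.T: {cos:.3f}")

The weight-mirror circuit described in the abstract reaches the same goal by a different route (the feedback path learns from correlated activity rather than from shared updates); the sketch above only illustrates why aligning B with the forward weights removes the need for weight transport.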
Author Information
Mohamed Akrout (University of Toronto)
Collin Wilson (University of Toronto)
Peter Humphreys (DeepMind)
Timothy Lillicrap (DeepMind & UCL)
Douglas Tweed (University of Toronto)
More from the Same Authors
- 2020 Poster: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network »
  Basile Confavreux · Friedemann Zenke · Everton Agnes · Timothy Lillicrap · Tim Vogels
- 2020 Spotlight: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network »
  Basile Confavreux · Friedemann Zenke · Everton Agnes · Timothy Lillicrap · Tim Vogels
- 2020 Poster: Training Generative Adversarial Networks by Solving Ordinary Differential Equations »
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2020 Spotlight: Training Generative Adversarial Networks by Solving Ordinary Differential Equations »
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2019 Poster: Experience Replay for Continual Learning »
  David Rolnick · Arun Ahuja · Jonathan Schwarz · Timothy Lillicrap · Gregory Wayne
- 2018 Poster: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures »
  Sergey Bartunov · Adam Santoro · Blake Richards · Luke Marris · Geoffrey E Hinton · Timothy Lillicrap
- 2018 Poster: Learning Attractor Dynamics for Generative Memory »
  Yan Wu · Gregory Wayne · Karol Gregor · Timothy Lillicrap
- 2018 Poster: Relational recurrent neural networks »
  Adam Santoro · Ryan Faulkner · David Raposo · Jack Rae · Mike Chrzanowski · Theophane Weber · Daan Wierstra · Oriol Vinyals · Razvan Pascanu · Timothy Lillicrap
- 2017 Poster: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Spotlight: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016 Poster: Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes »
  Jack Rae · Jonathan J Hunt · Ivo Danihelka · Tim Harley · Andrew Senior · Gregory Wayne · Alex Graves · Timothy Lillicrap
- 2016 Poster: Matching Networks for One Shot Learning »
  Oriol Vinyals · Charles Blundell · Timothy Lillicrap · koray kavukcuoglu · Daan Wierstra
- 2015 Poster: Learning Continuous Control Policies by Stochastic Value Gradients »
  Nicolas Heess · Gregory Wayne · David Silver · Timothy Lillicrap · Tom Erez · Yuval Tassa