Recent advances in machine learning have been made possible by the backpropagation-of-error algorithm. Backprop delivers detailed error feedback across multiple layers of representation to adjust synaptic weights, making it possible to train even very large networks effectively. Whether the brain employs similar deep learning algorithms remains contentious, and how it might do so remains a mystery. In particular, backprop uses the weights of the forward pass to precisely compute error feedback in the backward pass, a requirement known as weight transport. This way of computing errors across multiple layers is fundamentally at odds with what we know about the local computations of brains. We will describe new proposals for biologically motivated learning algorithms that are as effective as backpropagation without requiring weight transport.
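To make the weight-transport problem concrete, the sketch below contrasts backprop's backward pass with feedback alignment, one proposal in this line of work (see the "Deep Learning without Weight Transport" poster listed below). It is a minimal illustration, not the talk's method: the layer sizes, learning rate, and random teacher network used to generate targets are all assumptions chosen for the example. Backprop reuses the transposed forward weights `W2.T` to route errors backward; feedback alignment replaces them with a fixed random matrix `B` that is never updated.

```python
# Minimal sketch of the weight-transport problem on a toy regression task.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30, 20, 10

# Forward weights (learned) and a fixed random feedback matrix (never learned).
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))  # feedback-alignment matrix

# A random linear "teacher" provides targets for the toy task.
T = rng.normal(0, 1.0, (n_out, n_in))

def train_step(x, y, lr=0.02, use_backprop=True):
    global W1, W2
    h = np.tanh(W1 @ x)        # forward pass, hidden layer
    y_hat = W2 @ h             # forward pass, output layer
    e = y_hat - y              # output error
    # Backward pass: backprop transports W2.T to the hidden layer;
    # feedback alignment routes the error through the fixed matrix B instead.
    feedback = W2.T if use_backprop else B
    delta_h = (feedback @ e) * (1 - h**2)   # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * np.sum(e**2)

for step in range(5000):
    x = rng.normal(0, 1.0, n_in)
    loss = train_step(x, T @ x, use_backprop=False)  # flip to True for backprop
    if step % 1000 == 0:
        print(step, loss)
```

Even with a fixed random B, the forward weights tend to come into alignment with the feedback path over training, which is why the backward error signals remain useful despite never transporting W2.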
Author Information
Timothy Lillicrap (DeepMind & UCL)
More from the Same Authors
- 2021 Spotlight: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning »
  Shahab Bakhtiari · Patrick Mineault · Timothy Lillicrap · Christopher Pack · Blake Richards
- 2022: Evaluating Long-Term Memory in 3D Mazes »
  Jurgis Pašukonis · Timothy Lillicrap · Danijar Hafner
- 2023 Poster: AndroidInTheWild: A Large-Scale Dataset For Android Device Control »
  Christopher Rawles · Alice Li · Oriana Riva · Daniel Rodriguez · Timothy Lillicrap
- 2022 Poster: Large-Scale Retrieval for Reinforcement Learning »
  Peter Humphreys · Arthur Guez · Olivier Tieleman · Laurent Sifre · Theophane Weber · Timothy Lillicrap
- 2022 Poster: Intra-agent speech permits zero-shot task acquisition »
  Chen Yan · Federico Carnevale · Petko I Georgiev · Adam Santoro · Aurelia Guy · Alistair Muldal · Chia-Chun Hung · Joshua Abramson · Timothy Lillicrap · Gregory Wayne
- 2022 Poster: On the Stability and Scalability of Node Perturbation Learning »
  Naoki Hiratani · Yash Mehta · Timothy Lillicrap · Peter E Latham
- 2021 Poster: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning »
  Shahab Bakhtiari · Patrick Mineault · Timothy Lillicrap · Christopher Pack · Blake Richards
- 2021 Poster: Towards Biologically Plausible Convolutional Networks »
  Roman Pogodin · Yash Mehta · Timothy Lillicrap · Peter E Latham
- 2020 Poster: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network »
  Basile Confavreux · Friedemann Zenke · Everton Agnes · Timothy Lillicrap · Tim Vogels
- 2020 Spotlight: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network »
  Basile Confavreux · Friedemann Zenke · Everton Agnes · Timothy Lillicrap · Tim Vogels
- 2020 Poster: Training Generative Adversarial Networks by Solving Ordinary Differential Equations »
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2020 Spotlight: Training Generative Adversarial Networks by Solving Ordinary Differential Equations »
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2019: Panel Session: A new hope for neuroscience »
  Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli
- 2019: Panel Discussion »
  Linda Smith · Josh Tenenbaum · Lisa Anne Hendricks · James McClelland · Timothy Lillicrap · Jesse Thomason · Jason Baldridge · Louis-Philippe Morency
- 2019: Timothy Lillicrap »
  Timothy Lillicrap
- 2019 Poster: Experience Replay for Continual Learning »
  David Rolnick · Arun Ahuja · Jonathan Richard Schwarz · Timothy Lillicrap · Gregory Wayne
- 2019 Poster: Deep Learning without Weight Transport »
  Mohamed Akrout · Collin Wilson · Peter Humphreys · Timothy Lillicrap · Douglas Tweed
- 2018: Invited Talk 2 »
  Timothy Lillicrap
- 2018 Poster: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures »
  Sergey Bartunov · Adam Santoro · Blake Richards · Luke Marris · Geoffrey E Hinton · Timothy Lillicrap
- 2018 Poster: Learning Attractor Dynamics for Generative Memory »
  Yan Wu · Gregory Wayne · Karol Gregor · Timothy Lillicrap
- 2018 Poster: Relational recurrent neural networks »
  Adam Santoro · Ryan Faulkner · David Raposo · Jack Rae · Mike Chrzanowski · Theophane Weber · Daan Wierstra · Oriol Vinyals · Razvan Pascanu · Timothy Lillicrap
- 2017: Scalable RL and AlphaGo »
  Timothy Lillicrap
- 2017: Panel on "What neural systems can teach us about building better machine learning systems" »
  Timothy Lillicrap · James J DiCarlo · Christopher Rozell · Viren Jain · Nathan Kutz · William Gray Roncal · Bingni Brunton
- 2017: Backpropagation and deep learning in the brain »
  Timothy Lillicrap
- 2017 Poster: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Spotlight: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016: Tim Lillicrap »
  Timothy Lillicrap
- 2016 Poster: Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes »
  Jack Rae · Jonathan J Hunt · Ivo Danihelka · Tim Harley · Andrew Senior · Gregory Wayne · Alex Graves · Timothy Lillicrap
- 2016 Poster: Matching Networks for One Shot Learning »
  Oriol Vinyals · Charles Blundell · Timothy Lillicrap · koray kavukcuoglu · Daan Wierstra
- 2015 Poster: Learning Continuous Control Policies by Stochastic Value Gradients »
  Nicolas Heess · Gregory Wayne · David Silver · Timothy Lillicrap · Tom Erez · Yuval Tassa