To survive, animals must adapt synaptic weights based on external stimuli and rewards. And they must do so using local, biologically plausible learning rules -- a highly nontrivial constraint. One possible approach is to perturb neural activity (or use intrinsic, ongoing noise to perturb it), determine whether performance increases or decreases, and use that information to adjust the weights. This algorithm -- known as node perturbation -- has been shown to work on simple problems, but little is known about either its stability or its scalability with respect to network size. We investigate these issues both analytically, in deep linear networks, and numerically, in deep nonlinear ones. We show analytically that in deep linear networks with one hidden layer, both learning time and performance depend very weakly on hidden layer size. However, unlike stochastic gradient descent, when there is model mismatch between the student and teacher networks, node perturbation is always unstable. The instability is triggered by weight diffusion, which eventually leads to very large weights. This instability can be suppressed by weight normalization, at the cost of bias in the learning rule. We confirm numerically that a similar instability, and to a lesser extent similar scalability, exist in deep nonlinear networks trained on both a motor control task and image classification tasks. Our study highlights the limitations and potential of node perturbation as a biologically plausible learning rule in the brain.
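As a concrete illustration of the rule the abstract describes, below is a minimal NumPy sketch of node perturbation in a one-hidden-layer linear student trained against a linear teacher. All specifics here (layer sizes, the noise scale `sigma`, the learning rate `eta`, and the commented-out row normalization) are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30, 100, 10

# Student: one hidden layer, linear. Teacher: a fixed linear map.
W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_out, n_hid))
W_teacher = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))

sigma = 0.1   # perturbation scale (assumed)
eta = 0.005   # learning rate (assumed)

for step in range(20000):
    x = rng.normal(size=n_in)
    y_star = W_teacher @ x

    # Clean pass and its loss.
    h = W1 @ x
    loss = 0.5 * np.sum((W2 @ h - y_star) ** 2)

    # Perturbed pass: inject noise into the hidden-unit activity.
    xi = sigma * rng.normal(size=n_hid)
    loss_pert = 0.5 * np.sum((W2 @ (h + xi) - y_star) ** 2)

    # Node perturbation: each weight moves in proportion to the
    # correlation between the noise its unit received and the
    # resulting change in loss (a global scalar reward signal).
    delta = (loss_pert - loss) / sigma**2
    W1 -= eta * delta * np.outer(xi, x)

    # One crude version of the weight normalization the abstract
    # mentions (assumed form): rescale each row to unit norm, which
    # suppresses weight diffusion at the cost of biasing the rule.
    # W1 /= np.linalg.norm(W1, axis=1, keepdims=True)
```

Note that the update uses only locally available quantities -- the noise a unit received, the inputs to its weights, and a global scalar loss change -- with no gradients transported across layers, which is what makes the rule biologically plausible.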
Author Information
Naoki Hiratani (Harvard University)
Yash Mehta (HHMI Janelia Research Campus)
Hi! I'm currently a research engineer working on challenging problems in neural architecture search under the supervision of Prof **Frank Hutter** (ELLIS Fellow). Previously, I was a researcher at the *Gatsby Computational Neuroscience Unit* at UCL, where I worked on evaluating biologically plausible perturbation-based learning algorithms for training deep networks under the guidance of **Prof Peter Latham** (Gatsby) and **Tim Lillicrap** (DeepMind). In the past, I've also worked on deep learning-based personality detection from text with **Prof Erik Cambria** (NTU Singapore). I thoroughly enjoy coding and working on hard algorithmic problems.
Timothy Lillicrap (DeepMind & UCL)
Peter E Latham (Gatsby Unit, UCL)
More from the Same Authors
- 2021 Spotlight: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning »
  Shahab Bakhtiari · Patrick Mineault · Timothy Lillicrap · Christopher Pack · Blake Richards
- 2022: Evaluating Long-Term Memory in 3D Mazes »
  Jurgis Pašukonis · Timothy Lillicrap · Danijar Hafner
- 2022 Poster: Large-Scale Retrieval for Reinforcement Learning »
  Peter Humphreys · Arthur Guez · Olivier Tieleman · Laurent Sifre · Theophane Weber · Timothy Lillicrap
- 2022 Poster: Intra-agent speech permits zero-shot task acquisition »
  Chen Yan · Federico Carnevale · Petko I Georgiev · Adam Santoro · Aurelia Guy · Alistair Muldal · Chia-Chun Hung · Joshua Abramson · Timothy Lillicrap · Gregory Wayne
- 2021 Poster: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning »
  Shahab Bakhtiari · Patrick Mineault · Timothy Lillicrap · Christopher Pack · Blake Richards
- 2021 Poster: Powerpropagation: A sparsity inducing weight reparameterisation »
  Jonathan Richard Schwarz · Siddhant Jayakumar · Razvan Pascanu · Peter E Latham · Yee Teh
- 2021 Poster: Towards Biologically Plausible Convolutional Networks »
  Roman Pogodin · Yash Mehta · Timothy Lillicrap · Peter E Latham
- 2020 Poster: Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks »
  Roman Pogodin · Peter E Latham
- 2020 Poster: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network »
  Basile Confavreux · Friedemann Zenke · Everton Agnes · Timothy Lillicrap · Tim Vogels
- 2020 Spotlight: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network »
  Basile Confavreux · Friedemann Zenke · Everton Agnes · Timothy Lillicrap · Tim Vogels
- 2020 Poster: Training Generative Adversarial Networks by Solving Ordinary Differential Equations »
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2020 Spotlight: Training Generative Adversarial Networks by Solving Ordinary Differential Equations »
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2019: Panel Session: A new hope for neuroscience »
  Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli
- 2019: Invited Talk: Deep learning without weight transport »
  Timothy Lillicrap
- 2019: Panel Discussion »
  Linda Smith · Josh Tenenbaum · Lisa Anne Hendricks · James McClelland · Timothy Lillicrap · Jesse Thomason · Jason Baldridge · Louis-Philippe Morency
- 2019: Timothy Lillicrap »
  Timothy Lillicrap
- 2019 Poster: Experience Replay for Continual Learning »
  David Rolnick · Arun Ahuja · Jonathan Richard Schwarz · Timothy Lillicrap · Gregory Wayne
- 2019 Poster: Deep Learning without Weight Transport »
  Mohamed Akrout · Collin Wilson · Peter Humphreys · Timothy Lillicrap · Douglas Tweed
- 2018: Invited Talk 2 »
  Timothy Lillicrap
- 2018 Poster: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures »
  Sergey Bartunov · Adam Santoro · Blake Richards · Luke Marris · Geoffrey E Hinton · Timothy Lillicrap
- 2018 Poster: Learning Attractor Dynamics for Generative Memory »
  Yan Wu · Gregory Wayne · Karol Gregor · Timothy Lillicrap
- 2018 Poster: Relational recurrent neural networks »
  Adam Santoro · Ryan Faulkner · David Raposo · Jack Rae · Mike Chrzanowski · Theophane Weber · Daan Wierstra · Oriol Vinyals · Razvan Pascanu · Timothy Lillicrap
- 2017: Scalable RL and AlphaGo »
  Timothy Lillicrap
- 2017: Panel on "What neural systems can teach us about building better machine learning systems" »
  Timothy Lillicrap · James J DiCarlo · Christopher Rozell · Viren Jain · Nathan Kutz · William Gray Roncal · Bingni Brunton
- 2017: Backpropagation and deep learning in the brain »
  Timothy Lillicrap
- 2017 Poster: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Spotlight: A simple neural network module for relational reasoning »
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016: Tim Lillicrap »
  Timothy Lillicrap
- 2016 Poster: Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes »
  Jack Rae · Jonathan J Hunt · Ivo Danihelka · Tim Harley · Andrew Senior · Gregory Wayne · Alex Graves · Timothy Lillicrap
- 2016 Poster: Matching Networks for One Shot Learning »
  Oriol Vinyals · Charles Blundell · Timothy Lillicrap · koray kavukcuoglu · Daan Wierstra
- 2015 Poster: Learning Continuous Control Policies by Stochastic Value Gradients »
  Nicolas Heess · Gregory Wayne · David Silver · Timothy Lillicrap · Tom Erez · Yuval Tassa
- 2013 Poster: Demixing odors - fast inference in olfaction »
  Agnieszka Grabska-Barwinska · Jeff Beck · Alexandre Pouget · Peter E Latham
- 2013 Spotlight: Demixing odors - fast inference in olfaction »
  Agnieszka Grabska-Barwinska · Jeff Beck · Alexandre Pouget · Peter E Latham
- 2011 Poster: How biased are maximum entropy models? »
  Jakob H Macke · Iain Murray · Peter E Latham
- 2007 Oral: Neural characterization in partially observed populations of spiking neurons »
  Jonathan W Pillow · Peter E Latham
- 2007 Poster: Neural characterization in partially observed populations of spiking neurons »
  Jonathan W Pillow · Peter E Latham