The search for biologically faithful synaptic plasticity rules has produced a large body of models. These models are usually inspired by, and fitted to, experimental data, yet they rarely yield neural dynamics that serve complex functions, suggesting that current plasticity models remain under-constrained by existing data. Here, we present an alternative approach that uses meta-learning to discover plausible synaptic plasticity rules. Instead of experimental data, the rules are constrained by the functions they should implement and the structure they are meant to produce. Briefly, we parameterize synaptic plasticity rules by a Volterra expansion and then use supervised learning methods (gradient descent or evolution strategies) to minimize a problem-dependent loss function that quantifies how effectively a candidate plasticity rule transforms an initially random network into one with the desired function. We first validate our approach by re-discovering previously described plasticity rules, starting at the single-neuron level with "Oja's rule", a simple Hebbian rule that captures the direction of greatest variance of a neuron's inputs (i.e., the first principal component). We then expand the problem to the network level, asking the framework to find Oja's rule together with an anti-Hebbian rule such that an initially random two-layer firing-rate network recovers several principal components of the input space after learning. Next, we move to networks of integrate-and-fire neurons with plastic inhibitory afferents and train for rules that achieve a target firing rate by countering tuned excitation. Our algorithm discovers a specific subset of the manifold of rules that can solve this task. Our work is a proof of principle of an automated and unbiased approach to unveil synaptic plasticity rules that obey biological constraints and implement complex functions.
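The pipeline described above can be summarized in a short sketch. The Python code below is illustrative only, not the authors' implementation: it assumes a minimal first-order Volterra-style rule space with five parameters, an inner loop that trains a single linear neuron with a candidate rule, and an evolution-strategies outer loop that minimizes a PCA-alignment loss. All names (`apply_rule`, `meta_loss`, `evolve`) and hyperparameters are hypothetical.

```python
import numpy as np

# Illustrative sketch only (not the authors' code). A candidate rule is a
# truncated Volterra-style expansion in presynaptic rate x, postsynaptic
# rate y, and weight w:
#   dw = eta * (th[0]*x*y + th[1]*y**2*w + th[2]*x + th[3]*y + th[4]*w)
# Oja's rule is the point th = [1, -1, 0, 0, 0] in this rule space.

def apply_rule(th, X, eta=0.005, n_epochs=5, seed=0):
    """Inner loop: train one linear neuron y = w @ x with the candidate rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])  # initially random weights
    for _ in range(n_epochs):
        for x in X:
            y = w @ x
            w += eta * (th[0]*x*y + th[1]*y**2*w
                        + th[2]*x + th[3]*y + th[4]*w)
            np.clip(w, -5.0, 5.0, out=w)  # keep unstable candidate rules finite
    return w

def meta_loss(th, X):
    """Problem-dependent loss: misalignment of the learned w with the first PC."""
    w = apply_rule(th, X)
    pc1 = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]
    cos = abs(w @ pc1) / (np.linalg.norm(w) * np.linalg.norm(pc1) + 1e-12)
    return 1.0 - cos  # zero when the learned w spans the first PC

def evolve(X, pop=40, sigma=0.1, lr=0.05, n_gens=60, seed=1):
    """Outer loop: evolution strategies over the rule parameters th."""
    rng = np.random.default_rng(seed)
    th = rng.normal(scale=0.1, size=5)
    for _ in range(n_gens):
        eps = rng.normal(size=(pop, 5))
        losses = np.array([meta_loss(th + sigma * e, X) for e in eps])
        adv = (losses - losses.mean()) / (losses.std() + 1e-12)
        th -= lr * (adv @ eps) / (pop * sigma)  # ES gradient estimate
    return th

# Toy inputs with one dominant direction of variance along the first axis.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5)) * np.array([3.0, 1.0, 0.5, 0.3, 0.1])
th = evolve(X)
print("discovered rule parameters:", np.round(th, 2))
```

The other optimizer mentioned in the abstract, gradient descent, would replace `evolve` with backpropagation through the unrolled plasticity dynamics in `apply_rule`; evolution strategies avoid differentiating through long, and in the later tasks spiking, simulations.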
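For the target-firing-rate task, the abstract reports that the algorithm finds a subset of the manifold of rules solving it. One well-known member of that manifold is the inhibitory plasticity rule of Vogels et al. (2011), shown below in a rate-based caricature; this is a hedged illustration of the task, with all numbers invented, and the rules the framework discovers need not take exactly this form.

```python
import numpy as np

# Rate-based caricature of the inhibitory plasticity task (all numbers are
# illustrative). Excitation is fixed and "tuned"; inhibitory weights follow
#   dw_inh = eta * pre_inh * (post - rho0),
# the Vogels et al. (2011) rule, one known solution to the target-rate problem.

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20
rho0, eta = 5.0, 1e-5                        # target rate, learning rate
w_exc = rng.uniform(0.0, 1.0, n_exc)         # fixed excitatory weights
w_inh = np.zeros(n_inh)                      # plastic inhibition, starts silent

rates = []
for step in range(50000):
    x_exc = rng.poisson(5.0, n_exc).astype(float)   # presynaptic activity
    x_inh = rng.poisson(5.0, n_inh).astype(float)
    post = max(0.0, w_exc @ x_exc - w_inh @ x_inh)  # rectified output rate
    w_inh += eta * x_inh * (post - rho0)            # drive post toward rho0
    np.clip(w_inh, 0.0, None, out=w_inh)            # inhibitory weights stay >= 0
    rates.append(post)

print(f"mean rate, last 1000 steps: {np.mean(rates[-1000:]):.2f} (target {rho0})")
```

Whenever the output rate exceeds the target, inhibitory synapses with active presynaptic partners strengthen; when it falls below, they weaken, so inhibition grows until it cancels the tuned excitation at the target rate.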
Author Information
Basile Confavreux (IST Austria)
Friedemann Zenke (Friedrich Miescher Institute)
Everton Agnes (University of Basel)
Timothy Lillicrap (DeepMind & UCL)
Tim Vogels (Institute of Science and Technology Austria)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network
  Thu. Dec 10th, 05:00 -- 07:00 PM, Poster Session 5 #1578
More from the Same Authors
- 2021 Spotlight: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning
  Shahab Bakhtiari · Patrick Mineault · Timothy Lillicrap · Christopher Pack · Blake Richards
- 2022: Evaluating Long-Term Memory in 3D Mazes
  Jurgis Pašukonis · Timothy Lillicrap · Danijar Hafner
- 2022 Poster: Large-Scale Retrieval for Reinforcement Learning
  Peter Humphreys · Arthur Guez · Olivier Tieleman · Laurent Sifre · Theophane Weber · Timothy Lillicrap
- 2022 Poster: Intra-agent speech permits zero-shot task acquisition
  Chen Yan · Federico Carnevale · Petko I Georgiev · Adam Santoro · Aurelia Guy · Alistair Muldal · Chia-Chun Hung · Joshua Abramson · Timothy Lillicrap · Gregory Wayne
- 2022 Poster: On the Stability and Scalability of Node Perturbation Learning
  Naoki Hiratani · Yash Mehta · Timothy Lillicrap · Peter E Latham
- 2022 Poster: Holomorphic Equilibrium Propagation Computes Exact Gradients Through Finite Size Oscillations
  Axel Laborieux · Friedemann Zenke
- 2021 Poster: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning
  Shahab Bakhtiari · Patrick Mineault · Timothy Lillicrap · Christopher Pack · Blake Richards
- 2021 Poster: Online Learning Of Neural Computations From Sparse Temporal Feedback
  Lukas Braun · Tim Vogels
- 2021 Poster: Towards Biologically Plausible Convolutional Networks
  Roman Pogodin · Yash Mehta · Timothy Lillicrap · Peter E Latham
- 2020 Poster: Training Generative Adversarial Networks by Solving Ordinary Differential Equations
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2020 Spotlight: Training Generative Adversarial Networks by Solving Ordinary Differential Equations
  Chongli Qin · Yan Wu · Jost Tobias Springenberg · Andy Brock · Jeff Donahue · Timothy Lillicrap · Pushmeet Kohli
- 2019: Panel Session: A new hope for neuroscience
  Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli
- 2019: Invited Talk: Deep learning without weight transport
  Timothy Lillicrap
- 2019: Panel Discussion
  Linda Smith · Josh Tenenbaum · Lisa Anne Hendricks · James McClelland · Timothy Lillicrap · Jesse Thomason · Jason Baldridge · Louis-Philippe Morency
- 2019: Timothy Lillicrap
  Timothy Lillicrap
- 2019 Poster: Experience Replay for Continual Learning
  David Rolnick · Arun Ahuja · Jonathan Richard Schwarz · Timothy Lillicrap · Gregory Wayne
- 2019 Poster: Deep Learning without Weight Transport
  Mohamed Akrout · Collin Wilson · Peter Humphreys · Timothy Lillicrap · Douglas Tweed
- 2018: Invited Talk 2
  Timothy Lillicrap
- 2018 Poster: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures
  Sergey Bartunov · Adam Santoro · Blake Richards · Luke Marris · Geoffrey E Hinton · Timothy Lillicrap
- 2018 Poster: Learning Attractor Dynamics for Generative Memory
  Yan Wu · Gregory Wayne · Karol Gregor · Timothy Lillicrap
- 2018 Poster: Relational recurrent neural networks
  Adam Santoro · Ryan Faulkner · David Raposo · Jack Rae · Mike Chrzanowski · Theophane Weber · Daan Wierstra · Oriol Vinyals · Razvan Pascanu · Timothy Lillicrap
- 2017: Scalable RL and AlphaGo
  Timothy Lillicrap
- 2017: Panel on "What neural systems can teach us about building better machine learning systems"
  Timothy Lillicrap · James J DiCarlo · Christopher Rozell · Viren Jain · Nathan Kutz · William Gray Roncal · Bingni Brunton
- 2017: Backpropagation and deep learning in the brain
  Timothy Lillicrap
- 2017 Poster: A simple neural network module for relational reasoning
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Spotlight: A simple neural network module for relational reasoning
  Adam Santoro · David Raposo · David Barrett · Mateusz Malinowski · Razvan Pascanu · Peter Battaglia · Timothy Lillicrap
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016: Tim Lillicrap
  Timothy Lillicrap
- 2016 Poster: Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes
  Jack Rae · Jonathan J Hunt · Ivo Danihelka · Tim Harley · Andrew Senior · Gregory Wayne · Alex Graves · Timothy Lillicrap
- 2016 Poster: Matching Networks for One Shot Learning
  Oriol Vinyals · Charles Blundell · Timothy Lillicrap · koray kavukcuoglu · Daan Wierstra
- 2015 Poster: Learning Continuous Control Policies by Stochastic Value Gradients
  Nicolas Heess · Gregory Wayne · David Silver · Timothy Lillicrap · Tom Erez · Yuval Tassa