How do neural networks learn to represent information? Here, we address this question by assuming that neural networks seek to generate an optimal population representation for a fixed linear decoder. We define a loss function for the quality of the population read-out and derive the dynamical equations for both neurons and synapses from the requirement to minimize this loss. The dynamical equations yield a network of integrate-and-fire neurons undergoing Hebbian plasticity. We show that, through learning, initially regular and highly correlated spike trains evolve towards Poisson-distributed and independent spike trains with much lower firing rates. The learning rule drives the network into an asynchronous, balanced regime where all inputs to the network are represented optimally for the given decoder. We show that the network dynamics and synaptic plasticity jointly balance the excitation and inhibition received by each unit as tightly as possible and, in doing so, minimize the prediction error between the inputs and the decoded outputs. In turn, spikes are signalled only when this prediction error exceeds a certain value, thereby implementing a predictive coding scheme. Our work suggests that several of the features reported in cortical networks, such as the high trial-to-trial variability, the balance between excitation and inhibition, and spike-timing dependent plasticity, are simply signatures of an efficient, spike-based code.
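The spiking rule described in the abstract can be illustrated with a short simulation. The sketch below is not the authors' code: the random decoder D, the thresholds, the leak rate, and the two-dimensional input signal are illustrative assumptions, and the Hebbian plasticity of the recurrent weights is omitted. It only shows the read-out side of the scheme: each neuron spikes when the prediction error between the input and the decoded output, projected onto its column of the fixed linear decoder, exceeds its threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 20, 2            # neurons, input dimensions (illustrative sizes)
dt, T = 1e-3, 1.0       # time step and duration in seconds
lam = 10.0              # leak rate of the read-out filter (1/s)

D = rng.normal(size=(M, N)) / np.sqrt(N)   # fixed linear decoder (assumed, random)
thresh = 0.5 * np.sum(D**2, axis=0)        # per-neuron threshold ||D_i||^2 / 2

steps = int(T / dt)
t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * 2 * t),   # two-dimensional input signal
              np.cos(2 * np.pi * 3 * t)])

r = np.zeros(N)                            # filtered spike trains
spikes = np.zeros((N, steps))
sq_err = 0.0

for k in range(steps):
    x_hat = D @ r                          # decoded output for the fixed decoder
    err = x[:, k] - x_hat                  # prediction error
    drive = D.T @ err                      # error projected onto each decoder column
    i = int(np.argmax(drive - thresh))
    if drive[i] > thresh[i]:               # spike only when the error is large enough
        spikes[i, k] = 1.0
        r[i] += 1.0
    r -= lam * r * dt                      # leaky integration of the spike trains
    sq_err += float(np.sum(err**2))

print("mean firing rate (Hz):", spikes.sum() / (N * T))
print("mean squared read-out error:", sq_err / steps)
```

In the full model of the paper, the recurrent weights are additionally learned with a local Hebbian rule so that this error-driven spiking emerges from integrate-and-fire dynamics with balanced excitation and inhibition; the sketch simply hard-codes the resulting greedy spike rule.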
Author Information
Ralph Bourdoukan (École Normale Supérieure - Inserm)
David Barrett (École Normale Supérieure)
Christian Machens (Fundação Champalimaud)
Sophie Denève (GNT, École Normale Supérieure)
More from the Same Authors
- 2020 Poster: Understanding spiking networks through convex optimization (Allan Mancoo · Sander Keemink · Christian Machens)
- 2020 Poster: Compact task representations as a normative model for higher-order brain activity (Severin Berger · Christian Machens)
- 2015 Poster: Enforcing balance allows local supervised learning in spiking recurrent networks (Ralph Bourdoukan · Sophie Denève)
- 2014 Poster: Unsupervised learning of an efficient short-term memory network (Pietro Vertechi · Wieland Brendel · Christian Machens)
- 2014 Poster: Extracting Latent Structure From Multiple Interacting Neural Populations (Joao Semedo · Amin Zandvakili · Adam Kohn · Christian Machens · Byron M Yu)
- 2014 Spotlight: Unsupervised learning of an efficient short-term memory network (Pietro Vertechi · Wieland Brendel · Christian Machens)
- 2014 Poster: Spatio-temporal Representations of Uncertainty in Spiking Neural Networks (Cristina Savin · Sophie Denève)
- 2014 Spotlight: Spatio-temporal Representations of Uncertainty in Spiking Neural Networks (Cristina Savin · Sophie Denève)
- 2013 Poster: Firing rate predictions in optimal balanced networks (David G Barrett · Sophie Denève · Christian Machens)
- 2011 Poster: Demixed Principal Component Analysis (Wieland Brendel · Ranulfo Romo · Christian Machens)