Recent proposals suggest that large, generic neuronal networks could store memory traces of past input sequences in their instantaneous state. Such a proposal raises important theoretical questions about the duration of these memory traces and their dependence on network size, connectivity, and signal statistics. Prior work, in the case of Gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can outperform an equivalent feedforward network. However, a more ethologically relevant scenario is that of sparse input sequences. In this scenario, we show how linear neural networks can essentially perform compressed sensing (CS) of past inputs, thereby attaining a memory capacity that exceeds the number of neurons. This enhanced capacity is achieved by a class of "orthogonal" recurrent networks, and not by feedforward networks or generic recurrent networks. We exploit techniques from the statistical physics of disordered systems to analytically compute the decay of memory traces in such networks as a function of network size, signal sparsity, and integration time. Alternatively, viewed purely from the perspective of CS, this work introduces a new ensemble of measurement matrices derived from dynamical systems, and provides a theoretical analysis of their asymptotic performance.
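The mechanism described in the abstract can be illustrated numerically. The sketch below is a minimal discrete-time caricature, not the paper's actual model: a linear network with a random orthogonal recurrent matrix W and a feedforward vector v integrates a sparse input sequence, so its final state is x = A s, where the columns of the effective measurement matrix A are W^t v. The sparse history s (length T > N) is then recovered from the N-dimensional state by L1-regularized least squares, here via plain iterative soft-thresholding (ISTA). All names and parameter values (N, T, k, lam, the iteration count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, k = 100, 150, 5  # neurons, history length (> N), nonzero inputs

# Random orthogonal recurrent connectivity via QR decomposition
W, _ = np.linalg.qr(rng.standard_normal((N, N)))
v = rng.standard_normal(N)
v /= np.linalg.norm(v)  # unit-norm feedforward weight vector

# Sparse input history: k nonzero entries at random times
s = np.zeros(T)
support = rng.choice(T, k, replace=False)
s[support] = rng.standard_normal(k)

# Effective measurement matrix: column t is W^t v, so the network
# state after integrating the whole sequence is x = A @ s
A = np.empty((N, T))
col = v.copy()
for t in range(T):
    A[:, t] = col
    col = W @ col
x = A @ s  # instantaneous state encodes the full sparse history

# Decode the history by LASSO, solved with ISTA
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
s_hat = np.zeros(T)
for _ in range(5000):
    g = s_hat - (A.T @ (A @ s_hat - x)) / L
    s_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

err = np.linalg.norm(s_hat - s) / np.linalg.norm(s)
```

Although T exceeds the number of neurons N, the sparse history is recovered with small relative error, illustrating how sparsity lets an orthogonal network's memory capacity exceed N; replacing W with a generic (non-orthogonal) random matrix degrades the column geometry of A and, with it, the recovery.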
Author Information
Surya Ganguli (Stanford)
Haim Sompolinsky (Hebrew University and Harvard University)
More from the Same Authors

2019 Poster: A unified theory for the origin of grid cells through the lens of pattern formation »
Ben Sorscher · Gabriel Mel · Surya Ganguli · Samuel Ocko 
2019 Poster: Universality and individuality in neural dynamics across large populations of recurrent networks »
Niru Maheswaranathan · Alex H Williams · Matthew Golub · Surya Ganguli · David Sussillo 
2019 Spotlight: A unified theory for the origin of grid cells through the lens of pattern formation »
Ben Sorscher · Gabriel Mel · Surya Ganguli · Samuel Ocko 
2019 Spotlight: Universality and individuality in neural dynamics across large populations of recurrent networks »
Niru Maheswaranathan · Alex H Williams · Matthew Golub · Surya Ganguli · David Sussillo 
2019 Poster: From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction »
Hidenori Tanaka · Aran Nayebi · Niru Maheswaranathan · Lane McIntosh · Stephen Baccus · Surya Ganguli 
2019 Poster: Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics »
Niru Maheswaranathan · Alex H Williams · Matthew Golub · Surya Ganguli · David Sussillo 
2018 Poster: The emergence of multiple retinal cell types through efficient coding of natural movies »
Samuel Ocko · Jack Lindsey · Surya Ganguli · Stephane Deny 
2018 Poster: Statistical mechanics of low-rank tensor decomposition »
Jonathan Kadmon · Surya Ganguli 
2018 Poster: Task-Driven Convolutional Recurrent Models of the Visual System »
Aran Nayebi · Daniel Bear · Jonas Kubilius · Kohitij Kar · Surya Ganguli · David Sussillo · James J DiCarlo · Daniel Yamins 
2017 Poster: Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net »
Anirudh Goyal · Nan Rosemary Ke · Surya Ganguli · Yoshua Bengio 
2017 Poster: Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice »
Jeffrey Pennington · Samuel Schoenholz · Surya Ganguli 
2016 Poster: Optimal Architectures in a Solvable Model of Deep Networks »
Jonathan Kadmon · Haim Sompolinsky 
2016 Poster: Exponential expressivity in deep neural networks through transient chaos »
Ben Poole · Subhaneil Lahiri · Maithra Raghu · Jascha Sohl-Dickstein · Surya Ganguli 
2016 Poster: An equivalence between high dimensional Bayes optimal inference and M-estimation »
Madhu Advani · Surya Ganguli 
2016 Poster: Deep Learning Models of the Retinal Response to Natural Scenes »
Lane McIntosh · Niru Maheswaranathan · Aran Nayebi · Surya Ganguli · Stephen Baccus 
2015 Invited Talk: Computational Principles for Deep Neuronal Architectures »
Haim Sompolinsky 
2015 Poster: Deep Knowledge Tracing »
Chris Piech · Jonathan Bassen · Jonathan Huang · Surya Ganguli · Mehran Sahami · Leonidas J Guibas · Jascha Sohl-Dickstein 
2014 Workshop: Deep Learning and Representation Learning »
Andrew Y Ng · Yoshua Bengio · Adam Coates · Roland Memisevic · Sharanyan Chetlur · Geoffrey E Hinton · Shamim Nemati · Bryan Catanzaro · Surya Ganguli · Herbert Jaeger · Phil Blunsom · Leon Bottou · Volodymyr Mnih · Chen-Yu Lee · Rich M Schwartz 
2014 Poster: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization »
Yann N Dauphin · Razvan Pascanu · Caglar Gulcehre · Kyunghyun Cho · Surya Ganguli · Yoshua Bengio 
2013 Poster: A memory frontier for complex synapses »
Subhaneil Lahiri · Surya Ganguli 
2013 Oral: A memory frontier for complex synapses »
Subhaneil Lahiri · Surya Ganguli 
2010 Poster: Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics »
Kanaka Rajan · L F Abbott · Haim Sompolinsky