Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.
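The core procedure described in the abstract, finding fixed points of the recurrent dynamics and linearizing around them, can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it uses a small random vanilla RNN (update `h' = tanh(W h + b)` with zero input), locates a fixed point by gradient descent on the scalar objective `q(h) = 0.5 * ||F(h) - h||^2`, and then inspects the eigenvalues of the Jacobian at that point. The network size, weight scaling, learning rate, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32  # number of hidden units (illustrative, not from the paper)

# Random vanilla RNN update h_{t+1} = F(h_t) = tanh(W h_t + b), with zero input.
# Scaling W down keeps the map contractive so a fixed point exists.
W = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)
b = 0.1 * rng.standard_normal(N)

def F(h):
    return np.tanh(W @ h + b)

# Find a fixed point by minimizing q(h) = 0.5 * ||F(h) - h||^2 with gradient descent.
h = np.zeros(N)
for _ in range(5000):
    r = F(h) - h                     # residual F(h) - h
    s = 1.0 - F(h) ** 2              # tanh'(W h + b), elementwise
    J = s[:, None] * W               # Jacobian dF/dh at the current h
    grad = (J - np.eye(N)).T @ r     # gradient of q(h)
    h -= 0.5 * grad

# h should now satisfy F(h) ≈ h to numerical precision.
assert np.linalg.norm(F(h) - h) < 1e-6

# Linearize around the fixed point: eigenvalues of the Jacobian characterize
# the local dynamics. Eigenvalues with |lambda| close to 1 indicate slow
# directions; a line attractor corresponds to one such nearly marginal mode.
J = (1.0 - F(h) ** 2)[:, None] * W
eigs = np.linalg.eigvals(J)
print(sorted(np.abs(eigs))[-3:])  # the largest eigenvalue magnitudes
```

For a trained sentiment RNN one would run this optimization from many initial states drawn from the network's trajectories, collect all resulting fixed points, and examine their arrangement; the paper reports that they lie approximately along a line.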
Author Information
Niru Maheswaranathan (Google Brain)
Alex Williams (Stanford University)
Matthew Golub (Stanford University)
Surya Ganguli (Stanford University)
David Sussillo (Google Inc.)
More from the Same Authors
- 2020 Poster: Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
  Stanislav Fort · Gintare Karolina Dziugaite · Mansheej Paul · Sepideh Kharaghani · Daniel Roy · Surya Ganguli
- 2020 Poster: Point process models for sequence detection in high-dimensional neural spike trains
  Alex Williams · Anthony Degleris · Yixin Wang · Scott Linderman
- 2020 Oral: Point process models for sequence detection in high-dimensional neural spike trains
  Alex Williams · Anthony Degleris · Yixin Wang · Scott Linderman
- 2020 Poster: Predictive coding in balanced neural networks with noise, chaos and delays
  Jonathan Kadmon · Jonathan Timcheck · Surya Ganguli
- 2020 Poster: Identifying Learning Rules From Neural Network Observables
  Aran Nayebi · Sanjana Srivastava · Surya Ganguli · Daniel Yamins
- 2020 Spotlight: Identifying Learning Rules From Neural Network Observables
  Aran Nayebi · Sanjana Srivastava · Surya Ganguli · Daniel Yamins
- 2020 Poster: Pruning neural networks without any data by iteratively conserving synaptic flow
  Hidenori Tanaka · Daniel Kunin · Daniel Yamins · Surya Ganguli
- 2019 Poster: A unified theory for the origin of grid cells through the lens of pattern formation
  Ben Sorscher · Gabriel Mel · Surya Ganguli · Samuel Ocko
- 2019 Poster: Universality and individuality in neural dynamics across large populations of recurrent networks
  Niru Maheswaranathan · Alex Williams · Matthew Golub · Surya Ganguli · David Sussillo
- 2019 Spotlight: A unified theory for the origin of grid cells through the lens of pattern formation
  Ben Sorscher · Gabriel Mel · Surya Ganguli · Samuel Ocko
- 2019 Spotlight: Universality and individuality in neural dynamics across large populations of recurrent networks
  Niru Maheswaranathan · Alex Williams · Matthew Golub · Surya Ganguli · David Sussillo
- 2019 Poster: From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction
  Hidenori Tanaka · Aran Nayebi · Niru Maheswaranathan · Lane McIntosh · Stephen Baccus · Surya Ganguli
- 2018 Poster: The emergence of multiple retinal cell types through efficient coding of natural movies
  Samuel Ocko · Jack Lindsey · Surya Ganguli · Stephane Deny
- 2018 Poster: Statistical mechanics of low-rank tensor decomposition
  Jonathan Kadmon · Surya Ganguli
- 2018 Poster: Task-Driven Convolutional Recurrent Models of the Visual System
  Aran Nayebi · Daniel Bear · Jonas Kubilius · Kohitij Kar · Surya Ganguli · David Sussillo · James J DiCarlo · Daniel Yamins
- 2017 Poster: Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net
  Anirudh Goyal · Nan Rosemary Ke · Surya Ganguli · Yoshua Bengio
- 2017 Poster: Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
  Jeffrey Pennington · Samuel Schoenholz · Surya Ganguli
- 2016 Poster: An Online Sequence-to-Sequence Model Using Partial Conditioning
  Navdeep Jaitly · Quoc V Le · Oriol Vinyals · Ilya Sutskever · David Sussillo · Samy Bengio
- 2016 Poster: Exponential expressivity in deep neural networks through transient chaos
  Ben Poole · Subhaneil Lahiri · Maithra Raghu · Jascha Sohl-Dickstein · Surya Ganguli
- 2016 Poster: An equivalence between high dimensional Bayes optimal inference and M-estimation
  Madhu Advani · Surya Ganguli
- 2016 Poster: Deep Learning Models of the Retinal Response to Natural Scenes
  Lane McIntosh · Niru Maheswaranathan · Aran Nayebi · Surya Ganguli · Stephen Baccus
- 2015 Poster: Deep Knowledge Tracing
  Chris Piech · Jonathan Bassen · Jonathan Huang · Surya Ganguli · Mehran Sahami · Leonidas Guibas · Jascha Sohl-Dickstein
- 2014 Workshop: Deep Learning and Representation Learning
  Andrew Y Ng · Yoshua Bengio · Adam Coates · Roland Memisevic · Sharanyan Chetlur · Geoffrey E Hinton · Shamim Nemati · Bryan Catanzaro · Surya Ganguli · Herbert Jaeger · Phil Blunsom · Leon Bottou · Volodymyr Mnih · Chen-Yu Lee · Rich M Schwartz
- 2014 Poster: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
  Yann N Dauphin · Razvan Pascanu · Caglar Gulcehre · Kyunghyun Cho · Surya Ganguli · Yoshua Bengio
- 2013 Poster: A memory frontier for complex synapses
  Subhaneil Lahiri · Surya Ganguli
- 2013 Oral: A memory frontier for complex synapses
  Subhaneil Lahiri · Surya Ganguli
- 2010 Poster: Short-term memory in neuronal networks through dynamical compressed sensing
  Surya Ganguli · Haim Sompolinsky