Poster
Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics
Niru Maheswaranathan · Alex H Williams · Matthew Golub · Surya Ganguli · David Sussillo

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #156

Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.
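The abstract's core procedure, finding fixed points of the recurrent update and linearizing around them, can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a hypothetical vanilla tanh RNN with placeholder weights `W`, `U`, `b` and a constant (here zero) input, finds an approximate fixed point by minimizing the update residual, and inspects the eigenvalues of the Jacobian at that point (eigenvalues near 1 mark slow, line-attractor-like directions).

```python
# Sketch only: fixed-point finding and linearization for a toy vanilla RNN.
# Weights and the zero-input assumption are illustrative, not from the paper.
import jax
import jax.numpy as jnp

def rnn_step(h, x, W, U, b):
    """One step of a vanilla tanh RNN: h_{t+1} = tanh(W h_t + U x_t + b)."""
    return jnp.tanh(W @ h + U @ x + b)

def find_fixed_point(h0, x, W, U, b, lr=0.05, steps=5000):
    """Minimize q(h) = 1/2 ||F(h, x) - h||^2 by gradient descent from h0."""
    q = lambda h: 0.5 * jnp.sum((rnn_step(h, x, W, U, b) - h) ** 2)
    grad_q = jax.grad(q)
    h = h0
    for _ in range(steps):
        h = h - lr * grad_q(h)
    return h

def linearize(h_star, x, W, U, b):
    """Jacobian of the update at the fixed point and its eigendecomposition."""
    J = jax.jacobian(lambda h: rnn_step(h, x, W, U, b))(h_star)
    eigvals, eigvecs = jnp.linalg.eig(J)
    return J, eigvals, eigvecs

# Example usage with random placeholder weights and zero input.
n, m = 8, 4
W = 0.9 * jax.random.normal(jax.random.PRNGKey(0), (n, n)) / jnp.sqrt(n)
U = jax.random.normal(jax.random.PRNGKey(1), (n, m)) / jnp.sqrt(m)
b = jnp.zeros(n)
x0 = jnp.zeros(m)

h_star = find_fixed_point(jnp.zeros(n), x0, W, U, b)
J, eigvals, _ = linearize(h_star, x0, W, U, b)
print("fixed-point residual:", float(jnp.linalg.norm(rnn_step(h_star, x0, W, U, b) - h_star)))
print("largest |eigenvalue|:", float(jnp.max(jnp.abs(eigvals))))
```

In the paper's setting, eigenvalues of the linearized dynamics with magnitude very close to 1 correspond to directions along which the state decays slowly; a single such slow direction shared across fixed points is what produces the approximate line attractor used to integrate sentiment evidence over a document.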

Author Information

Niru Maheswaranathan (Google Brain)
Alex H Williams (Stanford University)
Matthew Golub (Stanford University)
Surya Ganguli (Stanford)
David Sussillo (Google Inc.)
