

Poster in Workshop: Symmetry and Geometry in Neural Representations (NeurReps)

Learning and Shaping Manifold Attractors for Computation in Gated Neural ODEs

Timothy Kim · Tankut Can · Kamesh Krishnamurthy

Keywords: [ interpretability ] [ neural ODEs ] [ computational neuroscience ] [ continuous attractor geometry ] [ dynamical systems ] [ differential equations ] [ gating ]


Abstract:

Understanding how the dynamics in biological and artificial neural networks implement the computations required for a task is a salient open question in machine learning and neuroscience. A particularly fruitful paradigm is computation via dynamical attractors, which is especially relevant for computations requiring the memory storage of continuous variables. We explore the interplay of attractor geometry and task structure in recurrent neural networks. Furthermore, we are interested in finding low-dimensional effective representations that enhance interpretability. To this end, we introduce gated neural ODEs (gnODEs) and probe their performance on a continuous memory task. The gnODEs combine the expressive power of neural ordinary differential equations (nODEs) with the trainability conferred by gating interactions. We also discover that an emergent property of the gating interaction is an inductive bias toward learning (approximate) continuous (manifold) attractor solutions, which are necessary to solve the continuous memory task. Finally, we show how reduced-dimensional gnODEs retain their modeling power while greatly improving interpretability, even allowing explicit visualization of the manifold attractor geometry.
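To make the gating construction concrete, below is a minimal sketch in PyTorch of one plausible gated neural ODE cell. The specific gating form dh/dt = g(h) * (-h + f(h)), the forward-Euler integrator, and all names (GatedODECell, integrate, hidden_dim) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class GatedODECell(nn.Module):
        """One plausible gnODE vector field: a sigmoid gate g(h)
        multiplicatively rescales the leak/update rate of a vanilla
        neural-ODE drift f(h). (Assumed form, for illustration only.)"""
        def __init__(self, hidden_dim: int):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
            self.g = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid())

        def forward(self, t, h):
            # Where the gate saturates near zero, the dynamics slow down,
            # which can favor (approximate) manifold attractor solutions.
            return self.g(h) * (-h + self.f(h))

    def integrate(cell, h0, dt=0.05, steps=200):
        # Simple forward-Euler rollout of the ODE (assumed integrator choice).
        h = h0
        for _ in range(steps):
            h = h + dt * cell(None, h)
        return h

    cell = GatedODECell(hidden_dim=8)
    h_final = integrate(cell, torch.randn(1, 8))
    print(h_final.shape)  # torch.Size([1, 8])

In this sketch the gate acts as a learned, state-dependent time constant: driving it toward zero along some directions leaves the state nearly frozen there, which is one way an inductive bias toward continuous attractors could emerge.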
