

Poster

Flexible mapping of abstract domains by grid cells via self-supervised extraction and projection of generalized velocity signals

Abhiram Iyer · Sarthak Chandra · Sugandha Sharma · Ila Fiete

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Grid cells in the medial entorhinal cortex create remarkable periodic maps of explored space during navigation. Recent studies show that they form similar maps of abstract cognitive spaces. Examples of such abstract environments include auditory tone sequences in which the pitch is continuously varied or images in which abstract features are continuously deformed (e.g., a cartoon bird whose legs stretch and shrink). Here we hypothesize that the brain generalizes how it maps spatial domains to mapping abstract spaces by extracting self-consistent and low-dimensional descriptions of displacements through these abstract spaces, and then leveraging the spatial velocity-integration capability of grid cells to efficiently build maps of different domains. Our neural circuit for abstract velocity extraction factorizes the content of these abstract domains from displacements within the domains to generate content-independent and self-consistent low-dimensional velocity estimates. Crucially, it uses a self-supervised geometric consistency constraint that requires displacements along closed-loop trajectories to sum to zero, an integration that is itself performed by the downstream grid cell circuit over learning. This process results in high-fidelity estimates of velocities and allowed transitions in abstract domains, a crucial prerequisite for efficient map generation in these high-dimensional environments. We also show that our method outperforms traditional dimensionality reduction and deep-learning-based motion-extraction networks on the same set of tasks. This is the first neural circuit model to explain how grid cells can flexibly represent different abstract spaces, and it makes the novel prediction that they should do so while maintaining their population correlation and manifold structure across domains. Fundamentally, our model sheds light on the mechanistic origins of cognitive flexibility and transfer of representations across vastly different domains in brains, providing a potential self-supervised learning (SSL) framework for leveraging similar ideas in transfer learning and data-efficient generalization in machine learning and robotics.
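The central self-supervised signal described above, that estimated displacements around any closed-loop trajectory must sum to zero, can be sketched in a few lines. The sketch below is illustrative only and is not the authors' implementation: the network architecture, dimensionalities, and synthetic data are assumptions, the explicit summation stands in for the integration that the abstract attributes to the downstream grid cell circuit, and the sketch omits whatever additional structure the full model uses to avoid the degenerate solution of predicting zero velocity everywhere.

    # Minimal sketch (not the authors' code) of a loop-closure consistency loss:
    # a small network maps consecutive observation pairs to a low-dimensional
    # velocity estimate and is trained so that estimated displacements along a
    # closed-loop trajectory sum to zero. All names, sizes, and the synthetic
    # trajectory are illustrative assumptions.
    import torch
    import torch.nn as nn

    OBS_DIM, VEL_DIM = 64, 2  # assumed observation and abstract-velocity dimensions

    class VelocityExtractor(nn.Module):
        """Maps a pair of consecutive observations to a low-dimensional velocity."""
        def __init__(self, obs_dim=OBS_DIM, vel_dim=VEL_DIM):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * obs_dim, 128), nn.ReLU(),
                nn.Linear(128, vel_dim),
            )

        def forward(self, obs_t, obs_tp1):
            return self.net(torch.cat([obs_t, obs_tp1], dim=-1))

    def loop_closure_loss(model, loop_obs):
        """loop_obs: (T+1, obs_dim) observations along a trajectory whose last
        point equals its first; the self-supervised constraint is that the
        stepwise velocity estimates along the loop integrate (sum) to zero."""
        v = model(loop_obs[:-1], loop_obs[1:])   # (T, vel_dim) stepwise estimates
        return (v.sum(dim=0) ** 2).sum()         # squared norm of net displacement

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = VelocityExtractor()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        # Synthetic closed-loop trajectory in observation space (illustrative only).
        T = 32
        base = torch.randn(T, OBS_DIM)
        loop = torch.cat([base, base[:1]], dim=0)  # return to the starting observation

        for step in range(200):
            opt.zero_grad()
            loss = loop_closure_loss(model, loop)
            loss.backward()
            opt.step()
        print(f"final loop-closure loss: {loss.item():.4f}")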
