Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments
Rodents navigating in a well-known environment can rapidly learn and revisit observed reward locations, often after a single trial. While the mechanism for rapid path planning is unknown, the CA3 region of the hippocampus plays an important role, and emerging evidence suggests that place cell activity during hippocampal "preplay" periods may trace out future goal-directed trajectories. Here, we show how a particular mapping of space allows for the immediate generation of trajectories between arbitrary start and goal locations in an environment, based only on the mapped representation of the goal. We show that this representation can be implemented in a neural attractor network model, resulting in bump-like activity profiles resembling those of the CA3 region of the hippocampus. Neurons tend to locally excite neurons with similar place field centers, while inhibiting neurons with distant place field centers, such that stable bumps of activity can form at arbitrary locations in the environment. The network is initialized to represent a point in the environment, then weakly stimulated with an input corresponding to an arbitrary goal location. We show that the resulting activity can be interpreted as a gradient ascent on the value function induced by a reward at the goal location. Indeed, in networks with large place fields, we show that the network properties cause the bump to move smoothly from its initial location to the goal, around obstacles or walls. Our results illustrate that an attractor network with hippocampal-like attributes may be important for rapid path planning.
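To make the mechanism concrete, the following is a minimal sketch (not the authors' implementation) of a two-dimensional bump-attractor network in NumPy: place cells on a grid excite neighbors with similar place field centers and uniformly inhibit the rest, a bump is initialized at a start location, and a weak, spatially broad goal drive (a hand-crafted stand-in for the value-function-like goal representation described above) biases the bump toward the goal. All parameter values, the grid layout, and the shape of the goal input are illustrative assumptions; obstacles, walls, and the paper's learned spatial mapping are omitted.

```python
import numpy as np

# Place-field centres on a regular grid covering the unit square.
n_side = 30
xs = np.linspace(0.0, 1.0, n_side)
centres = np.array([(x, y) for y in xs for x in xs])      # shape (N, 2)
N = len(centres)

# Recurrent weights: local Gaussian excitation plus uniform global inhibition,
# so that a single stable bump of activity can form anywhere in the sheet.
sigma_exc = 0.10                 # spatial scale of excitation (assumed)
w_exc, w_inh = 1.0, 0.4          # excitation / inhibition strengths (assumed)
d2 = ((centres[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
W = w_exc * np.exp(-d2 / (2 * sigma_exc ** 2)) - w_inh

def bump(loc, width):
    """Gaussian activity (or input) profile centred on a 2D location."""
    return np.exp(-((centres - loc) ** 2).sum(-1) / (2 * width ** 2))

def decode(r):
    """Population-vector estimate of the centre of the activity bump."""
    return (r[:, None] * centres).sum(0) / (r.sum() + 1e-9)

start, goal = np.array([0.15, 0.15]), np.array([0.85, 0.80])

# Weak goal drive with a broad, value-function-like spatial profile
# (a hand-crafted stand-in for the goal representation in the paper).
goal_drive = 2.0 * np.exp(-np.sqrt(((centres - goal) ** 2).sum(-1)) / 0.5)

# Initialise the bump at the start location and integrate rate dynamics;
# the weak goal drive tilts the recurrent balance so the bump drifts goal-ward.
r = bump(start, width=0.08)
tau, dt, gain = 10.0, 1.0, 5.0
for step in range(3001):
    drive = W @ r + goal_drive
    r += (dt / tau) * (-r + np.tanh(gain * np.maximum(drive, 0.0)))
    if step % 500 == 0:
        print(f"step {step:4d}  decoded bump centre ~ {np.round(decode(r), 2)}")
```

Under these assumed parameters the decoded bump centre typically drifts from the start toward the goal over a few thousand integration steps; the drift speed and final accuracy depend on the kernel width, the inhibition strength, and the gradient of the goal drive, none of which are taken from the paper.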
Author Information
Dane Corneil (EPFL)
Wulfram Gerstner (EPFL)
More from the Same Authors
- 2022 Poster: Mesoscopic modeling of hidden spiking neurons (Shuqi Wang · Valentin Schmutz · Guillaume Bellec · Wulfram Gerstner)
- 2022 Poster: Kernel Memory Networks: A Unifying Framework for Memory Modeling (Georgios Iatropoulos · Johanni Brea · Wulfram Gerstner)
- 2021 Poster: Local plasticity rules can learn deep representations using self-supervised contrastive predictions (Bernd Illing · Jean Ventura · Guillaume Bellec · Wulfram Gerstner)
- 2021 Poster: Fitting summary statistics of neural data with a differentiable spiking network simulator (Guillaume Bellec · Shuqi Wang · Alireza Modirshanechi · Johanni Brea · Wulfram Gerstner)
- 2019: Poster Session (Pravish Sainath · Mohamed Akrout · Charles Delahunt · Nathan Kutz · Guangyu Robert Yang · Joseph Marino · L F Abbott · Nicolas Vecoven · Damien Ernst · andrew warrington · Michael Kagan · Kyunghyun Cho · Kameron Harris · Leopold Grinberg · John J. Hopfield · Dmitry Krotov · Taliah Muhammad · Erick Cobos · Edgar Walker · Jacob Reimer · Andreas Tolias · Alexander Ecker · Janaki Sheth · Yu Zhang · Maciej Wołczyk · Jacek Tabor · Szymon Maszke · Roman Pogodin · Dane Corneil · Wulfram Gerstner · Baihan Lin · Guillermo Cecchi · Jenna M Reinen · Irina Rish · Guillaume Bellec · Darjan Salaj · Anand Subramoney · Wolfgang Maass · Yueqi Wang · Ari Pakman · Jin Hyung Lee · Liam Paninski · Bryan Tripp · Colin Graber · Alex Schwing · Luke Prince · Gabriel Ocker · Michael Buice · Benjamin Lansdell · Konrad Kording · Jack Lindsey · Terrence Sejnowski · Matthew Farrell · Eric Shea-Brown · Nicolas Farrugia · Victor Nepveu · Jiwoong Im · Kristin Branson · Brian Hu · Ramakrishnan Iyer · Stefan Mihalas · Sneha Aenugu · Hananel Hazan · Sihui Dai · Tan Nguyen · Doris Tsao · Richard Baraniuk · Anima Anandkumar · Hidenori Tanaka · Aran Nayebi · Stephen Baccus · Surya Ganguli · Dean Pospisil · Eilif Muller · Jeffrey S Cheng · Gaël Varoquaux · Kamalaker Dadi · Dimitrios C Gklezakos · Rajesh PN Rao · Anand Louis · Christos Papadimitriou · Santosh Vempala · Naganand Yadati · Daniel Zdeblick · Daniela M Witten · Nicholas Roberts · Vinay Prabhu · Pierre Bellec · Poornima Ramesh · Jakob H Macke · Santiago Cadena · Guillaume Bellec · Franz Scherr · Owen Marschall · Robert Kim · Hannes Rapp · Marcio Fonseca · Oliver Armitage · Jiwoong Im · Thomas Hardcastle · Abhishek Sharma · Wyeth Bair · Adrian Valente · Shane Shang · Merav Stern · Rutuja Patil · Peter Wang · Sruthi Gorantla · Peter Stratton · Tristan Edwards · Jialin Lu · Martin Ester · Yurii Vlasov · Siavash Golkar)
- 2015 Oral: Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments (Dane Corneil · Wulfram Gerstner)
- 2011 Poster: Variational Learning for Recurrent Spiking Networks (Danilo J Rezende · Daan Wierstra · Wulfram Gerstner)
- 2011 Poster: From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models (Skander Mensi · Richard Naud · Wulfram Gerstner)
- 2010 Poster: Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models (Felipe Gerhard · Wulfram Gerstner)
- 2009 Poster: Code-specific policy gradient rules for spiking neurons (Henning Sprekeler · Guillaume Hennequin · Wulfram Gerstner)
- 2008 Poster: Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning (Gediminas Luksys · Carmen Sandi · Wulfram Gerstner)
- 2008 Oral: Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning (Gediminas Luksys · Carmen Sandi · Wulfram Gerstner)
- 2007 Poster: An online Hebbian learning rule that performs Independent Component Analysis (Claudia Clopath · André Longtin · Wulfram Gerstner)
- 2006 Poster: Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning (Gediminas Luksys · Jeremie Knuesel · Denis Sheynikhovich · Carmen Sandi · Wulfram Gerstner)