Fitting network models to neural activity is an important tool in neuroscience. A popular approach is to model a brain area with a probabilistic recurrent spiking network whose parameters maximize the likelihood of the recorded activity. Although this approach is widely used, we show that the resulting model does not produce realistic neural activity. To correct for this, we suggest augmenting the log-likelihood with terms that measure the dissimilarity between simulated and recorded activity. This dissimilarity is defined via summary statistics commonly used in neuroscience, and the optimization is efficient because it relies on back-propagation through the stochastically simulated spike trains. We analyze this method theoretically and show empirically that it generates more realistic activity statistics. We find that it improves upon other fitting algorithms for spiking network models, such as Generalized Linear Models (GLMs), which do not usually rely on back-propagation. This new fitting algorithm also makes it possible to include hidden neurons, which is otherwise notoriously hard, and we show that this can be crucial when inferring network connectivity from spike recordings.
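To make the recipe concrete, the sketch below illustrates the general idea on a toy discrete-time Bernoulli spiking network: the training loss combines the negative log-likelihood of the recorded spikes with the squared distance between a simple summary statistic (the trial-averaged firing rate) of simulated versus recorded activity, and gradients are back-propagated through the stochastic simulation with a straight-through estimator. All names, sizes, the choice of statistic, and the surrogate-gradient trick are illustrative assumptions, not the paper's exact implementation.

```python
import torch

torch.manual_seed(0)
n_neurons, n_steps, n_trials = 20, 200, 8

# Placeholder "recorded" spike trains (trials x time x neurons);
# in practice these would come from real recordings.
recorded = (torch.rand(n_trials, n_steps, n_neurons) < 0.05).float()

# Trainable parameters of a simple discrete-time recurrent Bernoulli spiking model.
W = torch.zeros(n_neurons, n_neurons, requires_grad=True)   # recurrent weights
b = torch.full((n_neurons,), -3.0, requires_grad=True)      # biases
opt = torch.optim.Adam([W, b], lr=1e-2)


def log_likelihood(data):
    # Bernoulli log-likelihood of recorded spikes given the recorded history
    # (the standard maximum-likelihood / GLM-style objective).
    prev = torch.cat([torch.zeros_like(data[:, :1]), data[:, :-1]], dim=1)
    p = torch.sigmoid(prev @ W.t() + b)
    return torch.distributions.Bernoulli(probs=p).log_prob(data).mean()


def simulate(n_trials):
    # Free-running rollout; spikes are Bernoulli samples, and a straight-through
    # estimator lets gradients flow back through the sampling step.
    spikes, prev = [], torch.zeros(n_trials, n_neurons)
    for _ in range(n_steps):
        p = torch.sigmoid(prev @ W.t() + b)
        s = torch.bernoulli(p).detach() + p - p.detach()
        spikes.append(s)
        prev = s
    return torch.stack(spikes, dim=1)


for step in range(200):
    sim = simulate(n_trials)
    # Dissimilarity of a summary statistic: trial-averaged firing rate (PSTH).
    stat_loss = ((sim.mean(dim=0) - recorded.mean(dim=0)) ** 2).mean()
    loss = -log_likelihood(recorded) + 10.0 * stat_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the likelihood term alone would reproduce the GLM-style fit, while the added statistic term penalizes simulations whose free-running firing rates drift away from the data; the weight of 10.0 on that term is an arbitrary choice for illustration.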
Author Information
Guillaume Bellec (Graz University of Technology)
Shuqi Wang (EPFL)
Alireza Modirshanechi (EPFL)
Johanni Brea (Swiss Federal Institute of Technology Lausanne)
Wulfram Gerstner (EPFL)
More from the Same Authors
- 2021: Neural NID Rules
  Luca Viano · Johanni Brea
- 2021 Poster: Local plasticity rules can learn deep representations using self-supervised contrastive predictions
  Bernd Illing · Jean Ventura · Guillaume Bellec · Wulfram Gerstner
- 2019: Poster Session
  Pravish Sainath · Mohamed Akrout · Charles Delahunt · Nathan Kutz · Guangyu Robert Yang · Joseph Marino · L F Abbott · Nicolas Vecoven · Damien Ernst · andrew warrington · Michael Kagan · Kyunghyun Cho · Kameron Harris · Leopold Grinberg · John J. Hopfield · Dmitry Krotov · Taliah Muhammad · Erick Cobos · Edgar Walker · Jacob Reimer · Andreas Tolias · Alexander Ecker · Janaki Sheth · Yu Zhang · Maciej Wołczyk · Jacek Tabor · Szymon Maszke · Roman Pogodin · Dane Corneil · Wulfram Gerstner · Baihan Lin · Guillermo Cecchi · Jenna M Reinen · Irina Rish · Guillaume Bellec · Darjan Salaj · Anand Subramoney · Wolfgang Maass · Yueqi Wang · Ari Pakman · Jin Hyung Lee · Liam Paninski · Bryan Tripp · Colin Graber · Alex Schwing · Luke Prince · Gabriel Ocker · Michael Buice · Benjamin Lansdell · Konrad Kording · Jack Lindsey · Terrence Sejnowski · Matthew Farrell · Eric Shea-Brown · Nicolas Farrugia · Victor Nepveu · Jiwoong Im · Kristin Branson · Brian Hu · Ramakrishnan Iyer · Stefan Mihalas · Sneha Aenugu · Hananel Hazan · Sihui Dai · Tan Nguyen · Doris Tsao · Richard Baraniuk · Anima Anandkumar · Hidenori Tanaka · Aran Nayebi · Stephen Baccus · Surya Ganguli · Dean Pospisil · Eilif Muller · Jeffrey S Cheng · Gaël Varoquaux · Kamalaker Dadi · Dimitrios C Gklezakos · Rajesh PN Rao · Anand Louis · Christos Papadimitriou · Santosh Vempala · Naganand Yadati · Daniel Zdeblick · Daniela M Witten · Nicholas Roberts · Vinay Prabhu · Pierre Bellec · Poornima Ramesh · Jakob H Macke · Santiago Cadena · Guillaume Bellec · Franz Scherr · Owen Marschall · Robert Kim · Hannes Rapp · Marcio Fonseca · Oliver Armitage · Jiwoong Im · Thomas Hardcastle · Abhishek Sharma · Wyeth Bair · Adrian Valente · Shane Shang · Merav Stern · Rutuja Patil · Peter Wang · Sruthi Gorantla · Peter Stratton · Tristan Edwards · Jialin Lu · Martin Ester · Yurii Vlasov · Siavash Golkar
- 2018 Poster: Long short-term memory and Learning-to-learn in networks of spiking neurons
  Guillaume Bellec · Darjan Salaj · Anand Subramoney · Robert Legenstein · Wolfgang Maass
- 2015 Poster: Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments
  Dane S Corneil · Wulfram Gerstner
- 2015 Oral: Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments
  Dane S Corneil · Wulfram Gerstner
- 2011 Poster: Variational Learning for Recurrent Spiking Networks
  Danilo J Rezende · Daan Wierstra · Wulfram Gerstner
- 2011 Poster: From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models
  Skander Mensi · Richard Naud · Wulfram Gerstner
- 2010 Poster: Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models
  Felipe Gerhard · Wulfram Gerstner
- 2009 Poster: Code-specific policy gradient rules for spiking neurons
  Henning Sprekeler · Guillaume Hennequin · Wulfram Gerstner
- 2008 Poster: Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning
  Gediminas Luksys · Carmen Sandi · Wulfram Gerstner
- 2008 Oral: Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning
  Gediminas Luksys · Carmen Sandi · Wulfram Gerstner
- 2007 Poster: An online Hebbian learning rule that performs Independent Component Analysis
  Claudia Clopath · André Longtin · Wulfram Gerstner
- 2006 Poster: Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning
  Gediminas Luksys · Jeremie Knuesel · Denis Sheynikhovich · Carmen Sandi · Wulfram Gerstner