Learning the geometry of latent neural manifolds from neural activity data is difficult in part because the ground truth is generally unknown, which makes manifold learning methods hard to evaluate. Probabilistic population codes (PPCs), a class of biologically plausible and self-consistent models of neural populations that encode parametric probability distributions, may offer a theoretical setting where it is possible to rigorously study manifold learning. It is natural to define the neural manifold of a PPC as the statistical manifold of the encoded distribution, and we derive a mathematical result showing that the information geometry of this statistical manifold is directly related to measurable covariance matrices. This suggests a simple but rigorously justified decoding strategy based on principal component analysis, which we illustrate using an analytically tractable PPC.
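The paper's analytically tractable PPC is not reproduced here, but the flavor of the PCA-based decoding strategy can be illustrated with a toy model. The sketch below assumes a hypothetical population with von Mises-like tuning curves over a circular stimulus and Poisson spiking (all parameters and the tuning-curve choice are illustrative assumptions, not the paper's model); PCA applied to the trial-by-neuron response matrix then yields a low-dimensional embedding that traces out the ring-like structure of the encoded stimulus variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PPC (illustrative assumption, not the paper's model):
# N neurons with von Mises-like tuning over a circular stimulus s,
# and Poisson spike counts r ~ Poisson(gain * f(s)).
N = 50
centers = np.linspace(0, 2 * np.pi, N, endpoint=False)

def tuning(s):
    # Tuning curve of each neuron evaluated at stimuli s (shape: trials x N)
    return np.exp(np.cos(s[:, None] - centers[None, :]) - 1.0)

S = rng.uniform(0, 2 * np.pi, 2000)   # random stimuli, one per trial
rates = 10.0 * tuning(S)              # mean firing rates
R = rng.poisson(rates)                # trial-by-neuron spike counts

# PCA via SVD of the mean-centered response matrix: the leading
# principal components capture the dominant stimulus-driven covariance.
Rc = R - R.mean(axis=0)
U, svals, Vt = np.linalg.svd(Rc, full_matrices=False)
proj = Rc @ Vt[:2].T                  # 2-D embedding of each trial

# Fraction of response variance captured by the top two components
var_frac = float((svals[:2] ** 2).sum() / (svals ** 2).sum())
```

For a circular stimulus, plotting `proj` (colored by `S`) reveals the ring-shaped manifold; the same recipe, applied to measured covariance matrices, is the kind of decoding the abstract refers to.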
Author Information
John Vastola (Harvard Medical School)
Zach Cohen (Harvard University)

Jan Drugowitsch (Harvard Medical School)
More from the Same Authors
- 2020 Poster: Adaptation Properties Allow Identification of Optimized Neural Codes
  Luke Rast · Jan Drugowitsch
- 2014 Poster: Optimal decision-making with time-varying evidence reliability
  Jan Drugowitsch · Ruben Moreno-Bote · Alexandre Pouget
- 2014 Spotlight: Optimal decision-making with time-varying evidence reliability
  Jan Drugowitsch · Ruben Moreno-Bote · Alexandre Pouget