Is the information geometry of probabilistic population codes learnable?
John Vastola · Zach Cohen · Jan Drugowitsch

Sat Dec 03 08:30 AM -- 08:40 AM (PST)
Event URL: https://openreview.net/forum?id=vCKJJM4Hj56

One reason it is difficult to learn the geometry of latent neural manifolds from neural activity data is that the ground truth is generally unknown, which makes manifold learning methods hard to evaluate. Probabilistic population codes (PPCs), a class of biologically plausible and self-consistent models of neural populations that encode parametric probability distributions, may offer a theoretical setting in which manifold learning can be studied rigorously. It is natural to define the neural manifold of a PPC as the statistical manifold of the encoded distribution. We derive a mathematical result showing that the information geometry of this statistical manifold is directly related to measurable covariance matrices, which suggests a simple but rigorously justified decoding strategy based on principal component analysis. We illustrate the strategy using an analytically tractable PPC.
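The covariance-based PCA strategy can be sketched in a toy simulation. The setup below is purely illustrative and not the authors' construction: a hypothetical population of neurons with Gaussian tuning curves over a 1D stimulus, Poisson spiking noise, and PCA applied to the empirical covariance of the responses. All parameter values (population size, tuning width, peak rate) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: N neurons with Gaussian tuning curves
# over a 1D stimulus; all parameters are illustrative assumptions.
N, T = 50, 5000
centers = np.linspace(-2.0, 2.0, N)          # preferred stimuli
stim = rng.uniform(-2.0, 2.0, size=T)        # random stimulus on each trial
rates = 10.0 * np.exp(-0.5 * (stim[:, None] - centers[None, :]) ** 2)
spikes = rng.poisson(rates)                  # Poisson spiking noise

# PCA via eigendecomposition of the measured (empirical) covariance matrix.
X = spikes - spikes.mean(axis=0)
cov = X.T @ X / (T - 1)
evals, evecs = np.linalg.eigh(cov)
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort eigenvalues descending

# The leading principal components capture the stimulus-driven
# (manifold) variance; the tail is dominated by Poisson noise.
explained = evals / evals.sum()
print(f"top 3 PCs explain {explained[:3].sum():.2f} of total variance")
```

The eigenvectors associated with the largest eigenvalues span the low-dimensional subspace in which the encoded stimulus varies, which is the sense in which measurable covariances expose the manifold's structure.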

Author Information

John Vastola (Harvard Medical School)

I'm a postdoc.

Zach Cohen (Harvard University)

I'm a PhD student in computational and theoretical neuroscience at Harvard University.

Jan Drugowitsch (Harvard Medical School)
