

Poster

Temporal alignment and latent Gaussian process factor inference in population spike trains

Lea Duncker · Maneesh Sahani

Room 210 #71

Keywords: [ Latent Variable Models ] [ Neuroscience ] [ Gaussian Processes ]


Abstract:

We introduce a novel scalable approach to identifying common latent structure in neural population spike trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation. Our approach is based on shared latent Gaussian processes (GPs) which are combined linearly, as in the Gaussian Process Factor Analysis (GPFA) algorithm. We extend GPFA to handle unbinned spike-train data by incorporating a continuous time point-process likelihood model, achieving scalability with a sparse variational approximation. Shared variability is separated into terms that express condition dependence, as well as trial-to-trial variation in trajectories. Finally, we introduce a nested GP formulation to capture variability in the rate of evolution along the trajectory. We show that the new method learns to recover latent trajectories in synthetic data, and can accurately identify the trial-to-trial timing of movement-related parameters from motor cortical data without any supervision.
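As an informal illustration (not the authors' code), the sketch below simulates the kind of generative structure the abstract describes: shared latent GP factors mixed linearly as in GPFA, an exponential link producing non-negative point-process intensities, and a simple monotonic time warp standing in for the nested-GP model of trial-to-trial timing variability. All function names, kernel choices, and parameter values here are assumptions made for the example.

```python
# Minimal generative sketch of a GPFA-style latent model with point-process
# spiking and a trial-specific time warp. Illustrative only; not the method's
# actual inference code, which uses a sparse variational approximation.
import numpy as np

def rbf_kernel(t, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance on a grid of times t (assumed kernel)."""
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def simulate_trial(rng, n_neurons=30, n_latents=3, duration=1.0, dt=1e-3,
                   warp_scale=0.2):
    t = np.arange(0.0, duration, dt)
    # Trial-specific monotonic-ish time warp: a crude stand-in for the nested
    # GP that models variability in the rate of progression along the trajectory.
    warp = t + warp_scale * np.sin(2 * np.pi * t / duration) * rng.uniform(-1, 1)
    warp = np.clip(warp, 0.0, duration)

    # Shared latent trajectories x_k drawn from independent GPs, evaluated at
    # the warped times.
    K = rbf_kernel(warp) + 1e-6 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    X = L @ rng.standard_normal((len(t), n_latents))   # (T, K)

    # Linear mixing (GPFA-style loadings) plus per-neuron offsets, then an
    # exponential link to obtain non-negative intensities.
    C = rng.standard_normal((n_latents, n_neurons)) * 0.5
    d = np.log(10.0) * np.ones(n_neurons)              # ~10 Hz baseline (assumed)
    rates = np.exp(X @ C + d)                          # (T, N), in Hz

    # Point-process (Poisson) spike generation in small bins of width dt.
    spikes = rng.poisson(rates * dt)
    return t, X, rates, spikes

rng = np.random.default_rng(0)
t, latents, rates, spikes = simulate_trial(rng)
print(spikes.shape, spikes.sum())
```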
