Poster

BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos

Eleanor Batty · Matthew Whiteway · Shreya Saxena · Dan Biderman · Taiga Abe · Simon Musall · Winthrop Gillis · Jeffrey Markowitz · Anne Churchland · John Cunningham · Sandeep R Datta · Scott Linderman · Liam Paninski

East Exhibition Hall B + C #189

Keywords: [ Neural Coding ] [ Neuroscience ] [ Neuroscience and Cognitive Science ]


Abstract:

A fundamental goal of systems neuroscience is to understand the relationship between neural activity and behavior. Behavior has traditionally been characterized by low-dimensional, task-related variables such as movement speed or response times. More recently, there has been a growing interest in automated analysis of high-dimensional video data collected during experiments. Here we introduce a probabilistic framework for the analysis of behavioral video and neural activity. This framework provides tools for compression, segmentation, generation, and decoding of behavioral videos. Compression is performed using a convolutional autoencoder (CAE), which yields a low-dimensional continuous representation of behavior. We then use an autoregressive hidden Markov model (ARHMM) to segment the CAE representation into discrete "behavioral syllables." The resulting generative model can be used to simulate behavioral video data. Finally, based on this generative model, we develop a novel Bayesian decoding approach that takes in neural activity and outputs probabilistic estimates of the full-resolution behavioral video. We demonstrate this framework on two different experimental paradigms using distinct behavioral and neural recording technologies.
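To make the segmentation step concrete, below is a minimal NumPy sketch of the middle stage of the pipeline: Viterbi decoding of discrete "behavioral syllables" from a continuous latent trajectory under a first-order autoregressive HMM. This is an illustrative reimplementation, not the authors' code; the function name `arhmm_viterbi`, the isotropic Gaussian noise assumption, and all parameter values in the usage example are assumptions for the sketch (the paper's ARHMM would be fit to CAE latents, e.g. with EM).

```python
import numpy as np

def arhmm_viterbi(x, A, pi, ar_weights, ar_biases, noise_var):
    """Most-likely discrete state sequence for a first-order ARHMM.

    x: (T, D) continuous latents (e.g. CAE codes); A: (K, K) state
    transition matrix; pi: (K,) initial state distribution;
    ar_weights: (K, D, D) per-state AR matrices; ar_biases: (K, D);
    noise_var: shared isotropic Gaussian noise variance.
    """
    T, D = x.shape
    K = A.shape[0]
    # Per-state log-likelihood of x[t] given x[t-1] under Gaussian AR
    # dynamics (the shared normalization constant is dropped).
    ll = np.zeros((T, K))
    for k in range(K):
        pred = x[:-1] @ ar_weights[k].T + ar_biases[k]   # (T-1, D)
        resid = x[1:] - pred
        ll[1:, k] = -0.5 * np.sum(resid**2, axis=1) / noise_var
    # Standard Viterbi recursion in log space.
    delta = np.log(pi) + ll[0]
    back = np.zeros((T, K), dtype=int)
    logA = np.log(A)
    for t in range(1, T):
        scores = delta[:, None] + logA          # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + ll[t]
    # Backtrace the best path.
    states = np.zeros(T, dtype=int)
    states[-1] = np.argmax(delta)
    for t in range(T - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    return states

# Usage: a 1-D trajectory that decays (state 0: x_t = 0.5 x_{t-1})
# and then drifts upward (state 1: x_t = x_{t-1} + 1).
x = np.array([[1.0], [0.5], [0.25], [1.25], [2.25], [3.25]])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
ar_weights = np.array([[[0.5]], [[1.0]]])
ar_biases = np.array([[0.0], [1.0]])
states = arhmm_viterbi(x, A, pi, ar_weights, ar_biases, noise_var=0.1)
print(states)  # [0 0 0 1 1 1]
```

In the full framework this segmentation is performed on the low-dimensional CAE representation rather than on raw pixels, which is what makes the discrete syllable model tractable for full-resolution video.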
