Oral
Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity
Ran Liu · Mehdi Azabou · Max Dabagia · Chi-Heng Lin · Mohammad Gheshlaghi Azar · Keith Hengen · Michal Valko · Eva Dyer

Fri Dec 10 04:20 PM -- 04:35 PM (PST)

Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
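The drop and jitter augmentations and the instance-specific alignment term described in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration, assuming binned spike counts shaped (batch, time, neurons); the helper names make_view and alignment_loss, the dropout and jitter parameters, and the use of cosine similarity are assumptions for illustration, not the authors' implementation, and the latent "swap" step and the VAE reconstruction and KL terms of the full objective are not shown.

import torch
import torch.nn.functional as F

def make_view(x, drop_prob=0.2, max_jitter=2):
    # Build a transformed view of binned spike counts x of shape (batch, time, neurons):
    # "drop" zeroes out a random subset of neurons per sample, and the temporal jitter
    # circularly shifts each sample by a small random offset along the time axis.
    keep = (torch.rand(x.size(0), 1, x.size(2), device=x.device) > drop_prob).float()
    view = x * keep
    shifts = torch.randint(-max_jitter, max_jitter + 1, (x.size(0),))
    return torch.stack([torch.roll(v, int(s), dims=0) for v, s in zip(view, shifts)])

def alignment_loss(z1, z2):
    # Instance-specific alignment: pull the representations of two views of the same
    # brain state together; cosine similarity is an assumed choice of similarity measure.
    return 1.0 - F.cosine_similarity(z1, z2, dim=-1).mean()

# Usage sketch: two augmented views of the same mini-batch pass through a shared encoder,
# and the alignment term would be added to the generative (reconstruction + KL) objective.
x = torch.poisson(torch.full((32, 10, 200), 0.5))  # synthetic stand-in for spike counts
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(10 * 200, 64))
z1, z2 = encoder(make_view(x)), encoder(make_view(x))
loss_align = alignment_loss(z1, z2)

Intuitively, the neuron dropout encourages invariance to which specific neurons happen to represent a state, while the temporal jitter encourages consistency across nearby time bins, matching the motivation given in the abstract.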

Author Information

Ran Liu (Georgia Institute of Technology)

I am a third-year Ph.D. candidate in the Machine Learning Program at Georgia Tech. I conduct my research in the Neural Data Science Lab, advised by Prof. Eva Dyer. My research interests lie at the intersection of Machine (Deep) Learning, Computational Neuroscience, and Computer Vision.

Mehdi Azabou (Georgia Institute of Technology)
Max Dabagia (Georgia Institute of Technology)
Chi-Heng Lin (Georgia Institute of Technology)
Mohammad Gheshlaghi Azar (DeepMind)
Keith Hengen (Washington University, St. Louis)
Michal Valko (DeepMind Paris / Inria / ENS Paris-Saclay)

Michal is a research scientist at DeepMind Paris and in the SequeL team at Inria Lille - Nord Europe, France, led by Philippe Preux and Rémi Munos. He also teaches the course Graphs in Machine Learning at l'ENS Cachan. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the “intelligence” that humans need to input into the system and 2) minimising the data that humans need to spend inspecting, classifying, or “tuning” the algorithms. Another important feature of machine learning algorithms, in his view, is the ability to adapt to changing environments. That is why he works in domains that can cope with minimal feedback, such as semi-supervised learning, bandit algorithms, and anomaly detection. The common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. His industrial collaborators include Intel, Technicolor, and Microsoft Research. He received his PhD in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht and was afterwards a postdoc with Rémi Munos.

Eva Dyer (Georgia Institute of Technology)
