
 
Workshop
Real Neurons & Hidden Units: future directions at the intersection of neuroscience and AI
Guillaume Lajoie · Eli Shlizerman · Maximilian Puelma Touzel · Jessica Thompson · Konrad Kording

Sat Dec 14 08:00 AM -- 06:20 PM (PST) @ East Ballroom A
Event URL: https://sites.google.com/mila.quebec/neuroaiworkshop/

Recent years have witnessed an explosion of progress in AI. With it, a proliferation of experts and practitioners is pushing the boundaries of the field without regard to the brain. This is in stark contrast with the field's transdisciplinary origins, when interest in designing intelligent algorithms was shared by neuroscientists, psychologists and computer scientists alike. Neuroscience has seen similar progress: novel experimental techniques now afford unprecedented access to brain activity and function. However, the traditional neuroscience research program lacks the frameworks needed to turn this access into an end-to-end understanding of biological intelligence. For the first time, mechanistic insights emerging from deep learning, reinforcement learning and other AI fields may be able to steer fundamental neuroscience research in ways that go beyond the standard uses of machine learning for modelling and data analysis. For example, training algorithms that succeed in artificial networks, developed without biological constraints, can motivate research questions and hypotheses about the brain. Conversely, a deeper understanding of brain computations at the level of large neural populations may help shape future directions in AI. This workshop aims to address this novel situation by building on existing AI-neuroscience relationships and, crucially, outlining new directions for artificial systems and next-generation neuroscience experiments. We invite contributions on the modern intersection of neuroscience and AI, in particular on questions that can only now be tackled thanks to recent progress in AI: the role of recurrent dynamics, inductive biases to guide learning, global versus local learning rules, and the interpretability of network activity. The workshop will promote discussion and showcase diverse perspectives on these open questions.

Sat 8:15 a.m. - 8:30 a.m. [iCal]
Opening Remarks (announcements)
Guillaume Lajoie, Jessica Thompson, Maximilian Puelma Touzel, Eli Shlizerman, Konrad Kording
Sat 8:30 a.m. - 9:00 a.m. [iCal]
Invited Talk: Hierarchical Reinforcement Learning: Computational Advances and Neuroscience Connections (talk)
Doina Precup
Sat 9:00 a.m. - 9:30 a.m. [iCal]

Recent advances in machine learning have been made possible by employing the backpropagation-of-error algorithm. Backprop enables the delivery of detailed error feedback across multiple layers of representation to adjust synaptic weights, allowing us to effectively train even very large networks. Whether or not the brain employs similar deep learning algorithms remains contentious; how it might do so remains a mystery. In particular, backprop uses the weights in the forward pass of the network to precisely compute error feedback in the backward pass. This way of computing errors across multiple layers is fundamentally at odds with what we know about the local computations of brains. We will describe new proposals for biologically motivated learning algorithms that are as effective as backpropagation without requiring weight transport.

Timothy Lillicrap
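The weight-transport problem described in the abstract can be made concrete with a small sketch of feedback alignment, one proposal in this line of work (not necessarily the exact algorithm presented in the talk): error feedback travels through a fixed random matrix B rather than the transpose of the forward weights. The task, network size, and learning rate below are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) for x in [-pi, pi].
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
X = np.hstack([x, np.ones_like(x)])   # append a constant input as a bias
Y = np.sin(x)

n_hidden = 32
W1 = rng.normal(0.0, 1.0, size=(2, n_hidden))
W2 = rng.normal(0.0, 0.1, size=(n_hidden, 1))

# Feedback alignment: the backward pass uses a FIXED random matrix B
# in place of W2.T, so no "weight transport" is required.
B = rng.normal(0.0, 0.5, size=(1, n_hidden))

lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1)                  # forward pass
    err = H @ W2 - Y                     # output error
    delta_h = (err @ B) * (1.0 - H**2)   # backprop would use err @ W2.T here
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

mse = float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
print(f"final MSE: {mse:.4f}")
```

The forward weights tend to align with the fixed feedback weights over training, which is why the random feedback still delivers useful error signals.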
Sat 9:30 a.m. - 9:45 a.m. [iCal]
Contributed talk: Eligibility traces provide a data-inspired alternative to backpropagation through time. Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass (talk)
Sat 9:45 a.m. - 10:30 a.m. [iCal]
Coffee Break + Posters (break)
Sat 10:30 a.m. - 11:00 a.m. [iCal]

One key distinction between artificial and biological neural networks is the presence of noise, both intrinsic, e.g. due to synaptic failures, and extrinsic, arising through complex recurrent dynamics. Traditionally, this noise has been viewed as a ‘bug’, and the main computational challenge that the brain needs to face. More recently, it has been argued that circuit stochasticity may be a ‘feature’, in that it can be recruited for useful computations, such as representing uncertainty about the state of the world. Here we lay out a new argument for the role of stochasticity during learning. In particular, we use a mathematically tractable stochastic neural network model that allows us to derive local plasticity rules for optimizing a given global objective. This rule leads to representations that reflect both task structure and stimulus priors in interesting ways. Moreover, in this framework stochasticity is both a feature, as learning cannot happen in the absence of noise, and a bug, as the noise corrupts neural representations. Importantly, the network learns to use recurrent interactions to compensate for the negative effects of noise and to maintain robust circuit function.

Cristina Savin
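The claim that learning cannot happen in the absence of noise can be illustrated with node perturbation, a classic REINFORCE-style local rule (used here purely as an analogy; it is not the plasticity rule derived in the talk): the weight update correlates the neuron's own intrinsic noise with the change in a global scalar loss, so setting the noise amplitude to zero makes the update vanish. All names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# A single stochastic linear neuron learning a target mapping.
# Only locally available signals are used: the presynaptic input x,
# the neuron's own noise xi, and a global scalar loss.
n_in = 10
w_true = rng.normal(size=n_in)   # target weights to recover
w = np.zeros(n_in)

sigma = 0.1   # intrinsic noise amplitude; with sigma = 0, no learning occurs
lr = 0.01

for _ in range(20000):
    x = rng.normal(size=n_in)
    xi = sigma * rng.normal()              # intrinsic output noise
    y = w @ x + xi                         # noisy output
    y_target = w_true @ x
    loss = (y - y_target) ** 2
    loss_clean = (w @ x - y_target) ** 2   # baseline without the noise
    # Local rule: correlate the noise with the noise-induced change in loss.
    # In expectation this follows the true gradient of the loss.
    w -= lr * (loss - loss_clean) / sigma**2 * xi * x

err = float(np.linalg.norm(w - w_true))
print(f"weight error: {err:.3f}")
```

The noise here is simultaneously the ‘feature’ (it carries the exploratory signal that makes the gradient estimate possible) and the ‘bug’ (it corrupts the output on every trial), mirroring the dual role described in the abstract.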
Sat 11:00 a.m. - 11:30 a.m. [iCal]
Invited Talk: Universality and individuality in neural dynamics across large populations of recurrent networks (talk)
David Sussillo
Sat 11:30 a.m. - 11:45 a.m. [iCal]
Contributed talk: How well do deep neural networks trained on object recognition characterize the mouse visual system? Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge (talk)
Sat 11:45 a.m. - 12:00 p.m. [iCal]
Contributed talk: Functional Annotation of Human Cognitive States using Graph Convolution Networks Yu Zhang, Pierre Bellec (talk)
Sat 12:00 p.m. - 2:00 p.m. [iCal]
Lunch Break (break)
Sat 2:00 p.m. - 2:30 p.m. [iCal]
Invited Talk: Simultaneous rigidity and flexibility through modularity in cognitive maps for navigation (talk)
Ila Fiete
Sat 2:30 p.m. - 3:00 p.m. [iCal]
Invited Talk: Theories for the emergence of internal representations in neural networks: from perception to navigation (talk)
Surya Ganguli
Sat 3:00 p.m. - 3:15 p.m. [iCal]
Contributed talk: Adversarial Training of Neural Encoding Models on Population Spike Trains Poornima Ramesh, Mohamad Atayi, Jakob H Macke (talk)
Sat 3:15 p.m. - 3:30 p.m. [iCal]
Contributed talk: Learning to Learn with Feedback and Local Plasticity. Jack Lindsey (talk)
Sat 3:30 p.m. - 4:15 p.m. [iCal]
Coffee Break + Posters (break)
Sat 4:15 p.m. - 4:45 p.m. [iCal]
Poster Session (posters)
Pravish Sainath, Mohamed Akrout, Charles Delahunt, Nathan Kutz, Guangyu Yang, Joe Marino, L F Abbott, Nicolas Vecoven, Damien Ernst, Andrew Warrington, Michael Kagan, Kyunghyun Cho, Kameron Harris, Leopold Grinberg, John J. Hopfield, Dmitry Krotov, Taliah Muhammad, Erick Cobos, Edgar Walker, Jacob Reimer, Andreas Tolias, Alexander Ecker, Janaki Sheth, Yu Zhang, Maciej Wołczyk, Jacek Tabor, Szymon Maszke, Roman Pogodin, Dane Corneil, Wulfram Gerstner, Baihan Lin, Guillermo Cecchi, Jenna M Reinen, Irina Rish, Guillaume Bellec, Darjan Salaj, Anand Subramoney, Wolfgang Maass, Yueqi Wang, Ari Pakman, Jin Hyung Lee, Liam Paninski, Bryan Tripp, Colin Graber, Alex Schwing, Luke Prince, Gabriel Ocker, Michael Buice, Ben Lansdell, Konrad Kording, Jack Lindsey, Terrence J Sejnowski, Matthew Farrell, Eric Shea-Brown, Nicolas Farrugia, Victor Nepveu, Daniel Im, Kristin Branson, Brian Hu, Ram Iyer, Stefan Mihalas, Sneha Aenugu, Hananel Hazan, Sophie Dai, Minh Nguyen, Ying Tsao, Richard Baraniuk, Anima Anandkumar, Hidenori Tanaka, Aran Nayebi, Stephen Baccus, Surya Ganguli, Dean Pospisil, Eilif Muller, Jeffrey S Cheng, Gaël Varoquaux, Kamalaker Dadi, Dimitrios C Gklezakos, Rajesh PN Rao, Anand Louis, Christos Papadimitriou, Santosh Vempala, Naganand Yadati, Daniel Zdeblick, Daniela M Witten, Nick Roberts, Vinay Prabhu, Pierre Bellec, Poornima Ramesh, Jakob H Macke, Santiago Cadena, Guillaume Bellec, Franz Scherr, Owen Marschall, Robert Kim, Hannes Rapp, Marcio Fonseca, Oliver Armitage, Jiwoong Im, Thomas Hardcastle, Abhishek Sharma, Wyeth Bair, Adrian Valente, Shane Shang, Merav Stern, Rutuja Patil, Peter Wang, Sruthi Gorantla, Peter Stratton, Tristan Edwards, Jialin Lu, Martin Ester, Yurii Vlasov, Siavash Golkar
Sat 4:45 p.m. - 5:15 p.m. [iCal]

Many models have postulated that the neocortex implements a hierarchical inference system, whereby each region sends predictions of the inputs it expects to lower-order regions, allowing the latter to learn from any prediction errors. Combining top-down predictions with bottom-up sensory information to generate errors that can then be communicated across the hierarchy is critical to credit assignment in deep predictive learning algorithms. Indirect experimental evidence supporting a hierarchical prediction system in the neocortex comes from both human and animal work. However, direct evidence for top-down guided prediction errors in the neocortex that can be used for deep credit assignment during unsupervised learning remains limited. Here, we address this issue with 2-photon calcium imaging of layer 2/3 and layer 5 pyramidal neurons in the primary visual cortex of awake mice during passive exposure to visual stimuli in which unexpected events occur. To assess the evidence for top-down guided prediction errors, we recorded from both the somatic compartments and the apical dendrites in layer 1, where a large number of top-down inputs are received. We find evidence for a diversity of prediction error signals depending on both the stimulus type and the cell type. In some cases these signals are themselves learned, and in turn they appear to drive learning. These data will help us both to understand hierarchical inference in the neocortex and, potentially, to guide new unsupervised techniques for machine learning.

Blake Richards
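The hierarchical-prediction scheme sketched in the abstract's opening, where a higher area predicts a lower area's input and the resulting prediction error drives both inference and learning, follows the classic Rao-Ballard predictive-coding setup. A minimal two-area sketch (illustrative only, not the circuit model from the talk; all sizes and rates are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# A higher area holds a latent vector r and predicts the input as U @ r.
# The lower area computes the prediction error e = x - U @ r, which drives
# fast inference on r and slow, Hebbian-like learning of the weights U.
n_x, n_r = 16, 4
U_true = rng.normal(size=(n_x, n_r)) / np.sqrt(n_x)  # generative weights
U = rng.normal(0.0, 0.1, size=(n_x, n_r))            # learned weights

lr_r, lr_U = 0.1, 0.01
for _ in range(5000):
    x = U_true @ rng.normal(size=n_r)      # sensory input from a latent cause
    r = np.zeros(n_r)
    for _ in range(50):                    # fast inference: settle the latent r
        e = x - U @ r                      # top-down prediction error
        r += lr_r * U.T @ e
    U += lr_U * np.outer(e, r)             # slow learning driven by the error

# After learning, prediction errors on a new input should be small.
x_test = U_true @ rng.normal(size=n_r)
r = np.zeros(n_r)
for _ in range(100):
    e = x_test - U @ r
    r += lr_r * U.T @ e
resid = float(np.linalg.norm(e) / np.linalg.norm(x_test))
print(f"relative prediction error: {resid:.3f}")
```

The same error signal serves two roles at different timescales, fast inference and slow plasticity, which is what makes top-down guided prediction errors a candidate mechanism for deep credit assignment.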
Sat 5:15 p.m. - 6:00 p.m. [iCal]
Panel Session: A new hope for neuroscience (panel)
Yoshua Bengio, Blake Richards, Timothy Lillicrap, Ila Fiete, David Sussillo, Doina Precup, Konrad Kording, Surya Ganguli

Author Information

Guillaume Lajoie (Université de Montréal / Mila)
Eli Shlizerman (Departments of Applied Mathematics and Electrical & Computer Engineering, University of Washington Seattle)
Maximilian Puelma Touzel (Mila)
Jessica Thompson (Université de Montréal)
Konrad Kording (University of Pennsylvania)
