This workshop explores the interface between cognitive neuroscience and recent advances in AI fields that aim to reproduce human performance, such as natural language processing and computer vision, with a particular focus on deep learning approaches to these problems.
When studying the cognitive capabilities of the brain, scientists follow a system identification approach: they present different stimuli to subjects and try to model the responses that different brain areas produce to those stimuli. The goal is to understand the brain by finding the function that expresses the activity of brain areas in terms of different properties of the stimulus. Experimental stimuli are becoming increasingly complex, with more and more researchers interested in studying real-life phenomena such as the perception of natural images or natural sentences. There is therefore a need for rich and adequate vector representations of stimulus properties, which can be obtained using advances in NLP, computer vision, or other relevant ML disciplines.
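In practice, this system identification approach is often implemented as a linear "encoding model" fit with ridge regression: stimulus feature vectors (e.g., word embeddings or network activations) are mapped to recorded voxel responses. The sketch below illustrates the idea on synthetic data; all dimensions, the feature space, and the ridge penalty are illustrative assumptions, not details from the workshop description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 stimuli, each described by a 50-dimensional
# feature vector (e.g., word embeddings or CNN activations), and the
# recorded responses of 1000 voxels to those stimuli.
n_stimuli, n_features, n_voxels = 200, 50, 1000
X = rng.standard_normal((n_stimuli, n_features))      # stimulus features
true_W = rng.standard_normal((n_features, n_voxels))  # unknown ground truth
Y = X @ true_W + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

# Ridge regression: estimate, for every voxel, a linear function from
# stimulus features to brain activity (the "encoding model").
alpha = 1.0
W_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# Predicted responses can then be correlated with measured responses
# to score how well the chosen feature space explains each voxel.
Y_pred = X @ W_hat
```

In real experiments the model is evaluated on held-out stimuli, and the choice of feature space (hand-crafted vs. learned by a deep network) is precisely what is under study.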
In parallel, new ML approaches, many of them in deep learning, are inspired to some extent by human behavior or biological principles. Neural networks, for example, were originally inspired by biological neurons. More recently, mechanisms such as attention, which are inspired by human behavior, have come into use. However, the bulk of these methods are independent of findings about brain function, and it is unclear whether it is at all beneficial for machine learning to try to emulate brain function in order to achieve the same tasks that the brain achieves.
To shed some light on this difficult but exciting question, we bring together experts from these converging fields in a new, highly interactive format built around short lectures from experts in both fields, each followed by a guided discussion.
This workshop continues a successful series: Machine Learning and Interpretation in Neuroimaging (MLINI), which has already had five iterations in which methods for analyzing and interpreting neuroimaging data were discussed in depth. In keeping with the workshop's tradition, we also visit the blossoming field of machine learning applied to neuroimaging data, and specifically the recent trend of using neural network models to analyze brain data, which is evolving on a seemingly orthogonal plane to the use of these algorithms to represent the information content of the brain. In this way we close the loop, studying neural networks in neuroscience both as a source of models of brain representations and as a tool for brain image analysis.
Thu 11:30 p.m. - 11:45 p.m. | Introductory remarks (Talk)
Thu 11:45 p.m. - 12:00 a.m. | Jessica Thompson - How can deep learning advance computational modeling of sensory information processing? (Talk)
Deep learning, computational neuroscience, and cognitive science have overlapping goals related to understanding intelligence such that perception and behaviour can be simulated in computational systems. In neuroimaging, machine learning methods have been used to test computational models of sensory information processing. Recently, these model comparison techniques have been used to evaluate deep neural networks (DNNs) as models of sensory information processing. However, the interpretation of such model evaluations is muddied by imprecise statistical conclusions. Here, we make explicit the types of conclusions that can be drawn from these existing model comparison techniques and how these conclusions change when the model in question is a DNN. We discuss how DNNs are amenable to new model comparison techniques that allow for stronger conclusions to be made about the computational mechanisms underlying sensory information processing.
Jessica Thompson
Fri 12:00 a.m. - 12:30 a.m. | Matthias Bethge - Texture perception in humans and machines (Talk)
Matthias Bethge
Fri 12:30 a.m. - 1:00 a.m. | Sven Eberhardt - More Feedback, Less Depth: Approximating Human Vision with Deep Networks (Talk)
Recent advances in Deep Convolutional Networks (DCNs) supporting increasingly deep architectures have demonstrated significant gains in object recognition accuracy when trained on large labeled image databases. While a growing body of work indicates this surge in DCN performance carries concomitant improvement in fitting both neural data in higher areas of the primate visual cortex and human psychophysical data during object recognition, key differences remain. To investigate these differences, we assess the correlation between computational models and human behavioral responses on a rapid animal vs. non-animal categorization task. We find that DCN recognition accuracy increases with higher stages of visual processing (higher-level stages indeed outperforming human participants on the same task) but that human decisions agree best with predictions from intermediate stages. These results suggest that while DCNs properly model visual features of intermediate complexity as used by the human visual system, more advanced visual processing relies on mechanisms not captured by these models. On which features do humans and DCNs base their object decisions? To test this, we introduce a competitive web-based game for discovering features that humans use for object recognition: one participant from a pair sequentially reveals parts of an object in an image until the other correctly identifies its category. Scoring image regions according to their proximity to correct recognition yields maps of visual feature importance for individual images. We find that these "realization" maps exhibit only weak correlation with relevance maps derived from DCNs or image salience algorithms. Cueing DCNs to attend to features emphasized by these maps improves their object recognition accuracy. Our results thus suggest that realization maps identify visual features that humans deem important for object recognition but are not adequately captured by DCNs. Finally, we suggest a novel DCN training approach in which we base our representation on object and surface structure, rather than picture class labels, to build a more human-like visual representation.
Sven Eberhardt
Fri 1:00 a.m. - 1:30 a.m. | Panel discussion I (Panel discussion)
Fri 1:30 a.m. - 2:00 a.m. | Coffee Break I
Fri 2:00 a.m. - 2:30 a.m. | Rajesh Rao - Modeling human decision making using POMDPs (Talk)
Rajesh PN Rao
Fri 2:30 a.m. - 3:00 a.m. | Tal Yarkoni - What does it mean to 'understand' what a neural network is doing? (Talk)
In recent years, researchers have drawn strong parallels between the information-processing architectures and learned representations found in the human brain and in deep neural networks (DNNs). There is increasing interest in trying to use insights gained from either neuroscience or deep learning to reciprocally inform work in the other field. A common claim by practitioners in both fields is that we still do not understand very much about the representations learned by neural networks, whether biological or artificial. In this talk, I argue that this "mysterian" view is both surprising and troubling. It is surprising in that it is often expressed by people who demonstrably do understand an enormous amount about the systems they are studying. And it is troubling in that, if the claim is taken to be true, it does not lend itself to optimism about our future ability to understand what exactly neural networks are learning. I argue that the most productive avenues of research in both neuroscience and deep learning may be those that largely sidestep questions about information content and focus instead on architectural and algorithmic considerations.
Tal Yarkoni
Fri 3:00 a.m. - 3:30 a.m. | Panel discussion II (Panel discussion)
Fri 3:30 a.m. - 5:00 a.m. | Lunch Break
Fri 5:00 a.m. - 6:00 a.m. | Spotlight Talks (Spotlight)
Fri 6:00 a.m. - 6:30 a.m. | Coffee Break II
Fri 6:30 a.m. - 7:30 a.m. | Poster Session
Fri 7:30 a.m. - 8:00 a.m. | Richard Socher - Tackling the Limits of Deep Learning for NLP (Talk)
Deep learning has made great progress in a variety of language and vision tasks. However, many practical and theoretical problems and limitations remain. In this talk I will introduce solutions to the following questions: How can words share a single encoding for input and output? How can previously unseen words be predicted at test time? How can a single deep learning model grow to handle many increasingly complex language tasks? Can an end-to-end trainable architecture solve both visual and textual question answering?
Richard Socher
Fri 8:00 a.m. - 8:30 a.m. | Alex Huth - Using Natural Language for Studying the Human Cortex (Talk)
Alexander G Huth
Fri 8:30 a.m. - 9:00 a.m. | Panel discussion III (Panel discussion)
Fri 9:00 a.m. - 9:30 a.m. | General Discussion (Panel discussion)
Author Information
Leila Wehbe (UC Berkeley)
Marcel Van Gerven (Radboud University)
Moritz Grosse-Wentrup (MPG Tuebingen)
Irina Rish (IBM Research AI)
Brian Murphy (BrainWaveBank)
Georg Langs (Medical University of Vienna)
Guillermo Cecchi (IBM Research)
Anwar O Nunez-Elizalde (UC Berkeley)