Context and Compositionality in Biological and Artificial Neural Systems
Javier Turek · Shailee Jain · Alexander Huth · Leila Wehbe · Emma Strubell · Alan Yuille · Tal Linzen · Christopher Honey · Kyunghyun Cho

Sat Dec 14 08:00 AM -- 06:00 PM (PST) @ West 217 - 219
Event URL: https://context-composition.github.io/

The ability to integrate semantic information across narratives is fundamental to language understanding in both biological and artificial cognitive systems. In recent years, enormous strides have been made in NLP and machine learning to develop architectures and techniques that effectively capture these effects. The field has moved away from traditional bag-of-words approaches that ignore temporal ordering, and has instead embraced RNNs, temporal CNNs, and Transformers, which incorporate contextual information at varying timescales. While these architectures have led to state-of-the-art performance on many difficult language understanding tasks, it remains unclear what representations these networks learn and how exactly they incorporate context. Interpreting these networks, systematically analyzing the advantages and disadvantages of different elements such as gating or attention, and characterizing the capacity of the networks across various timescales are open and important questions.

On the biological side, recent work in neuroscience suggests that areas in the brain are organized into a temporal hierarchy in which different areas are not only sensitive to specific semantic information but also to the composition of information at different timescales. Computational neuroscience has moved in the direction of leveraging deep learning to gain insights about the brain. By answering questions on the underlying mechanisms and representational interpretability of these artificial networks, we can also expand our understanding of temporal hierarchies, memory, and capacity effects in the brain.

In this workshop we aim to bring together researchers from machine learning, NLP, and neuroscience to explore and discuss how computational models should effectively capture the multi-timescale, context-dependent effects that seem essential for processes such as language understanding.

We invite you to submit papers related to the following (non-exhaustive) topics:
* Contextual sequence processing in the human brain
* Compositional representations in the human brain
* Systematic generalization in deep learning
* Compositionality in human intelligence
* Compositionality in natural language
* Understanding composition and temporal processing in neural network models
* New approaches to compositionality and temporal processing in language
* Hierarchical representations of temporal information
* Datasets for contextual sequence processing
* Applications of compositional neural networks to real-world problems

Submissions should be up to 4 pages excluding references, in NeurIPS format, and anonymized. The review process is double-blind.

We also welcome previously published papers that are within the scope of the workshop (no re-formatting required). These papers do not need to be anonymized and will receive only a light review.

Sat 8:00 a.m. - 8:15 a.m.

Note: schedule not final and may change

Alexander Huth
Sat 8:15 a.m. - 9:00 a.m.
Gary Marcus
Sat 9:00 a.m. - 9:45 a.m.
Gina Kuperberg
Sat 9:45 a.m. - 10:30 a.m.
Poster Session + Break (Poster Session)
Sat 10:30 a.m. - 10:40 a.m.

By Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky

Paul Soulos
Sat 10:40 a.m. - 10:50 a.m.

By Robert Kim, Terry Sejnowski

Robert Kim
Sat 10:50 a.m. - 11:00 a.m.

By Maxwell Nye, Armando Solar-Lezama, Joshua Tenenbaum, Brenden Lake

Maxwell Nye
Sat 11:00 a.m. - 12:00 p.m.

Cognitive neuroscience has always sought to understand the computational processes that occur in the brain. Despite this, years of brain imaging studies have shown us only where in the brain we can observe neural activity correlated with particular types of processing, and when. It has taught us remarkably little about the key question of how the brain computes the neural representations we observe.

The good news is that a new paradigm has begun to emerge over the past few years to directly address the how question. The key idea in this paradigm shift is to create explicit hypotheses concerning how computation is done in the brain, in the form of computer programs that perform the same computation (e.g., visual object recognition, sentence processing, equation solving). Alternative hypotheses can then be tested to see which computer program aligns best with the observed neural activity when humans and the program process the same input stimuli. We will use our work on language processing as a case study to illustrate this new paradigm, with the ELMo and BERT deep neural networks serving as the computer programs that process the same input sentences as the human. Using this case study, we will examine the potential and the limits of this new paradigm as a route toward understanding how the brain computes.

Tom Mitchell
Sat 12:00 p.m. - 2:00 p.m.
Poster Session + Lunch (Poster Session)
Maxwell Nye, Robert Kim, Toby St Clere Smithe, Takeshi D. Itoh, Omar U. Florez, Vesna G. Djokic, Sneha Aenugu, Mariya Toneva, Imanol Schlag, Dan Schwartz, Max Raphael Sobroza Marques, Pravish Sainath, Peng-Hsuan Li, Rishi Bommasani, Najoung Kim, Paul Soulos, Steven Frankland, Nadia Chirkova, Dongqi Han, Adam Kortylewski, Rich Pang, Milena Rabovsky, Jonathan Mamou, Vaibhav Kumar, Tales Marra
Sat 2:00 p.m. - 3:00 p.m.
Yoshua Bengio
Sat 3:00 p.m. - 3:30 p.m.
Ev Fedorenko - Composition as the core driver of the human language system (Talk)
Ev Fedorenko
Sat 3:30 p.m. - 4:00 p.m.
Break (Poster Session)
Sat 4:00 p.m. - 5:30 p.m.
Ted Willke, Ev Fedorenko, Kenton Lee, Paul Smolensky
Sat 5:30 p.m. - 5:45 p.m.
Closing remarks (Talk)
Leila Wehbe

Author Information

Javier Turek (Intel Labs)
Shailee Jain (The University of Texas at Austin)
Alexander Huth (The University of Texas at Austin)
Leila Wehbe (Carnegie Mellon University)
Emma Strubell (FAIR / CMU)
Alan Yuille (Johns Hopkins University)
Tal Linzen (Johns Hopkins University)
Christopher Honey (Johns Hopkins University)
Kyunghyun Cho (New York University)

Kyunghyun Cho is an associate professor of computer science and data science at New York University and a research scientist at Facebook AI Research. He was a postdoctoral fellow at the Université de Montréal until summer 2015 under the supervision of Prof. Yoshua Bengio, and received his PhD and MSc degrees from Aalto University in early 2014 under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.