How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure in which language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. We call this low-dimensional structure a language representation embedding because it encodes the relationships between representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI. Additionally, we find that the principal dimension of this structure can be used to create a metric that highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representation structure.
Author Information
Richard Antonello (University of Texas, Austin)
Javier Turek (Intel Labs)
Vy Vo (Intel Corporation)
Alexander Huth (The University of Texas at Austin)