Natural language contains information at multiple timescales. To understand how the human brain represents this information, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs). However, these LM-derived representations do not explicitly separate information at different timescales, making it difficult to interpret the encoding models. In this work, we construct interpretable multi-timescale representations by forcing individual units in an LSTM LM to integrate information over specific temporal scales. This allows us to directly map the timescale of information encoded by each individual fMRI voxel. Further, the standard fMRI encoding procedure does not account for the varying temporal properties of the encoding features. We modify the procedure so that it can capture both short- and long-timescale information. This approach outperforms other encoding models, particularly for voxels that represent long-timescale information, and it provides a finer-grained map of timescale information in the human language pathway. This serves as a framework for future work investigating temporal hierarchies across artificial and biological language systems.
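The encoding-model pipeline described in the abstract (regressing each voxel's fMRI response on LM-derived features, conventionally with ridge regression, then scoring predictions by per-voxel correlation) can be sketched as follows. This is a minimal illustration on simulated data; the feature matrices, voxel counts, and ridge penalty here are hypothetical stand-ins, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: fMRI time points (TRs), LM feature dimension, voxels.
n_train, n_test, n_feat, n_vox = 200, 50, 32, 10

# Stand-in "LM features" per TR and simulated voxel responses generated
# from a known linear weight matrix plus noise.
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
true_w = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ true_w + 0.1 * rng.standard_normal((n_train, n_vox))
Y_test = X_test @ true_w + 0.1 * rng.standard_normal((n_test, n_vox))

def ridge_fit(X, Y, alpha):
    """Closed-form ridge solution: W = (X^T X + alpha I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Fit one linear encoding model per voxel (all voxels solved jointly).
W = ridge_fit(X_train, Y_train, alpha=1.0)
pred = X_test @ W

# Score each voxel by the correlation between predicted and held-out
# responses, the standard evaluation in encoding-model work.
corr = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)]
print(np.mean(corr))
```

In practice the feature matrix would also include several delayed copies of the features to account for the hemodynamic response, and the ridge penalty would be chosen by cross-validation per voxel.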
Author Information
Shailee Jain (The University of Texas at Austin)
Vy Vo (Intel Corporation)
Shivangi Mahto (The University of Texas at Austin)
Amanda LeBel (The University of Texas at Austin)
Javier Turek (Intel Labs)
Alexander Huth (The University of Texas at Austin)
More from the Same Authors
- 2022: Cache-memory gated graph neural networks
  Guixiang Ma · Vy Vo · Nesreen K. Ahmed · Theodore Willke
- 2022: Memory in humans and deep language models: Linking hypotheses for model augmentation
  Omri Raccah · Phoebe Chen · Theodore Willke · David Poeppel · Vy Vo
- 2022 Workshop: Memory in Artificial and Real Intelligence (MemARI)
  Mariya Toneva · Javier Turek · Vy Vo · Shailee Jain · Kenneth Norman · Alexander Huth · Uri Hasson · Mihai Capotă
- 2022: Opening remarks
  Vy Vo
- 2021 Poster: Low-dimensional Structure in the Space of Language Representations is Reflected in Brain Responses
  Richard Antonello · Javier Turek · Vy Vo · Alexander Huth
- 2019 Workshop: Context and Compositionality in Biological and Artificial Neural Systems
  Javier Turek · Shailee Jain · Alexander Huth · Leila Wehbe · Emma Strubell · Alan Yuille · Tal Linzen · Christopher Honey · Kyunghyun Cho
- 2019: Opening Remarks
  Alexander Huth
- 2019 Poster: A Zero-Positive Learning Approach for Diagnosing Software Performance Regressions
  Mejbah Alam · Justin Gottschlich · Nesime Tatbul · Javier Turek · Tim Mattson · Abdullah Muzahid
- 2018 Poster: Incorporating Context into Language Encoding Models for fMRI
  Shailee Jain · Alexander Huth
- 2014 Poster: A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation
  Eran Treister · Javier S Turek