Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech
Shailee Jain · Vy Vo · Shivangi Mahto · Amanda LeBel · Javier Turek · Alexander Huth

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1681

Natural language contains information at multiple timescales. To understand how the human brain represents this information, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs). However, these LM-derived representations do not explicitly separate information at different timescales, making the resulting encoding models difficult to interpret. In this work, we construct interpretable multi-timescale representations by forcing individual units in an LSTM LM to integrate information over specific temporal scales. This allows us to explicitly map the timescale of information encoded by each individual fMRI voxel. Further, the standard fMRI encoding procedure does not account for varying temporal properties in the encoding features, so we modify the procedure to capture both short- and long-timescale information. This approach outperforms other encoding models, particularly for voxels that represent long-timescale information, and provides a finer-grained map of timescale information in the human language pathway. It serves as a framework for future work investigating temporal hierarchies across artificial and biological language systems.
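To make the encoding-model setup concrete, the sketch below shows the standard voxelwise approach the abstract builds on: ridge regression from stimulus features (e.g., LM-derived representations, one vector per fMRI volume) to each voxel's response, scored by voxelwise prediction correlation. This is a minimal illustration with synthetic data, not the authors' code; all names, shapes, and the regularization value are hypothetical.

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^{-1} X'Y.

    X : (n_timepoints, n_features) stimulus feature matrix
    Y : (n_timepoints, n_voxels)   fMRI responses
    Returns W : (n_features, n_voxels) per-voxel weight maps.
    """
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Synthetic stand-in for LM features and voxel responses (illustrative only).
rng = np.random.default_rng(0)
n_trs, n_feat, n_vox = 200, 50, 10
X = rng.standard_normal((n_trs, n_feat))                    # "LM features"
W_true = rng.standard_normal((n_feat, n_vox))
Y = X @ W_true + 0.1 * rng.standard_normal((n_trs, n_vox))  # "voxel responses"

W = fit_ridge(X, Y, alpha=1.0)
pred = X @ W

# Standard evaluation: correlation between predicted and measured response,
# computed separately for each voxel.
r = np.array([np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_vox)])
```

In practice such models are fit and evaluated on held-out data with cross-validated regularization; the paper's contribution is to structure the features so that each unit carries a known timescale, which makes the per-voxel weights interpretable as a timescale map.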

Author Information

Shailee Jain (The University of Texas at Austin)
Vy Vo (Intel Corporation)
Shivangi Mahto (The University of Texas at Austin)
Amanda LeBel (The University of Texas at Austin)
Javier Turek (Intel Labs)
Alexander Huth (The University of Texas at Austin)