
Incorporating Context into Language Encoding Models for fMRI
Shailee Jain · Alexander Huth

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 210 #99

Language encoding models help explain language processing in the human brain by learning functions that predict brain responses from the language stimuli that elicited them. Current word embedding-based approaches treat each stimulus word independently and thus ignore the influence of context on language understanding. In this work we instead build encoding models using rich contextual representations derived from an LSTM language model. Our models show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area. By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement stems both from the better word embeddings learned by the LSTM language model and from the contextual information itself. We are also able to use our models to map context sensitivity across the cortex. These results suggest that LSTM language models learn high-level representations that are related to representations in the human brain.
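To make the core idea concrete, here is a minimal sketch of a linearized encoding model of the kind the abstract describes: stimulus features (here, random stand-ins for LSTM hidden states) are regressed onto simulated voxel responses, and encoding performance is measured as the correlation between predicted and held-out responses. All names, shapes, and the use of ridge regression are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical illustration only: shapes and data are assumptions,
# not the authors' actual stimuli or fMRI recordings.
rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 200, 64, 10

# Stimulus features, e.g. contextual LSTM hidden states for the words
# presented at each fMRI timepoint (random stand-ins here).
X = rng.standard_normal((n_timepoints, n_features))

# Simulated brain responses: a linear function of the features plus noise.
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

# Fit a ridge-regularized linear encoding model (one weight vector per voxel).
model = Ridge(alpha=1.0).fit(X[:150], Y[:150])

# Encoding performance: per-voxel correlation between predicted and
# held-out responses.
pred = model.predict(X[150:])
scores = [np.corrcoef(pred[:, v], Y[150:, v])[0, 1] for v in range(n_voxels)]
```

In practice one would sweep the ridge penalty per voxel and, to probe context sensitivity as the abstract describes, compare models whose feature vectors are computed with different amounts of (or distorted) preceding context.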

Author Information

Shailee Jain (The University of Texas at Austin)
Alexander Huth (The University of Texas at Austin)
