

Spotlight in Workshop: UniReps: Unifying Representations in Neural Models

Increasing Brain-LLM Alignment via Information-Theoretic Compression

Mycal Tucker · Greta Tuckute


Abstract:

Recent work has discovered similarities between learned representations in large language models (LLMs) and human brain activity during language processing. However, it remains unclear what information LLM and brain representations share. In this work, inspired by the notion that brain data may include information not captured by LLMs, we apply an information bottleneck method to generate compressed representations of fMRI data. For certain brain regions in the frontal cortex, we find that compressing brain representations by a small amount increases their similarity to both BERT and GPT2 embeddings. Thus, our method not only improves LLM-brain alignment scores but also sheds light on how much information each representation scheme captures.
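The core pipeline described in the abstract — compress a brain representation, then measure its similarity to an LLM embedding — can be illustrated with a toy sketch. Everything below is hypothetical: the abstract's actual method is an information bottleneck applied to fMRI data, whereas this sketch stands in cruder block-averaging for the compression step and cosine similarity for the alignment score, using synthetic vectors in place of real fMRI and LLM embeddings. It shows only the qualitative effect the abstract reports: discarding a small amount of (noise-dominated) information can raise alignment.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity, a stand-in alignment score between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def compress(vec, k):
    """Crude lossy compression: split vec into k blocks, replace each block
    by its mean (repeated to preserve dimensionality). A stand-in for the
    information-bottleneck compression in the abstract, not the real method."""
    n = len(vec)
    block = n // k
    out = []
    for i in range(k):
        seg = vec[i * block:(i + 1) * block]
        m = sum(seg) / len(seg)
        out.extend([m] * len(seg))
    return out

random.seed(0)
dim = 64
# Hypothetical "LLM embedding": a smooth low-frequency signal.
llm = [math.sin(2 * math.pi * i / dim) for i in range(dim)]
# Hypothetical "brain representation": the same signal plus high-frequency
# noise, standing in for information the LLM does not capture.
brain = [x + random.gauss(0, 0.8) for x in llm]

raw_sim = cosine(brain, llm)
# Mild compression averages away the high-frequency component,
# which can increase the alignment score.
comp_sim = cosine(compress(brain, 8), llm)
print(f"raw alignment:        {raw_sim:.3f}")
print(f"compressed alignment: {comp_sim:.3f}")
```

With these synthetic vectors, the compressed representation scores higher than the raw one, mirroring the abstract's finding that a small amount of compression can increase brain-LLM similarity for some regions.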
