Poster in Workshop: Shared Visual Representations in Human and Machine Intelligence

Multimodal neural networks better explain multivoxel patterns in the hippocampus

Bhavin Choksi · Milad Mozafari · Rufin VanRullen · Leila Reddy


Abstract:

The human hippocampus possesses "concept cells", neurons that fire when presented with stimuli belonging to a specific concept, regardless of the modality. Recently, similar concept cells were discovered in a multimodal network called CLIP [1]. Here, we ask whether CLIP can explain the fMRI activity of the human hippocampus better than a purely visual (or linguistic) model. We extend our analysis to a range of publicly available uni- and multimodal models. We demonstrate that "multimodality" stands out as a key component when assessing the ability of a network to explain the multivoxel activity in the hippocampus.
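As a rough illustration of how such a model-to-brain comparison might be set up (not the authors' actual pipeline), the sketch below fits a cross-validated ridge encoding model from a network's stimulus embeddings to multivoxel responses and compares mean voxelwise prediction accuracy across models. All arrays, shapes, and hyperparameters are synthetic placeholders; in practice the feature matrices would be embeddings extracted from CLIP and the unimodal networks for each stimulus seen by the participants.

# Hedged sketch of an encoding-model comparison between a multimodal and a
# purely visual network. Data are synthetic; shapes, the ridge penalty, and
# the cross-validation scheme are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_stimuli, n_voxels = 200, 500

# Placeholder stimulus embeddings for two hypothetical models.
features = {
    "multimodal": rng.standard_normal((n_stimuli, 512)),
    "visual_only": rng.standard_normal((n_stimuli, 2048)),
}
# Placeholder fMRI multivoxel patterns (stimuli x voxels).
voxels = rng.standard_normal((n_stimuli, n_voxels))

def encoding_score(X, Y, n_splits=5, alpha=1.0):
    """Cross-validated voxelwise prediction accuracy (mean Pearson r)."""
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Correlate predicted and observed responses voxel by voxel.
        r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
             for v in range(Y.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))

for name, X in features.items():
    print(f"{name}: mean voxelwise r = {encoding_score(X, voxels):.3f}")

A higher cross-validated score for the multimodal features would, under this setup, indicate that they better explain the hippocampal multivoxel patterns; representational similarity analysis would be an equally plausible alternative to the regression approach sketched here.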
