Workshop: Causal Representation Learning

Attention for Causal Relationship Discovery from Biological Neural Dynamics

Ziyu Lu · Anika Tabassum · Shruti Kulkarni · Lu Mi · Nathan Kutz · Eric Shea-Brown · Seung-Hwan Lim

Keywords: [ Neuroscience ] [ Causal relationship discovery ] [ Transformers ]


This paper explores the potential of transformer models for causal representation learning in networks with complex nonlinear dynamics at every node, such as neurobiological and biophysical networks. Our study focuses on a proof-of-concept investigation based on simulated neural dynamics, for which the ground-truth causality is known through the underlying connectivity matrix. For transformer models trained to forecast neuronal population dynamics, we show that the cross-attention module effectively captures the causal relationships among neurons, with accuracy equal to or superior to that of the most popular causality discovery method. While we acknowledge that real-world neurobiological data will bring further challenges, including dynamic connectivity and unobserved variability, this research offers an encouraging preliminary glimpse into the utility of transformer models for causal representation learning in neuroscience.
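The abstract describes reading the cross-attention weights of a trained forecasting transformer as candidate causal scores between neurons, to be compared against the ground-truth connectivity matrix. The paper's own architecture and training code are not shown here; the following is only a minimal NumPy sketch of the readout step, with toy random query/key embeddings standing in for learned ones (the function names and the single-head formulation are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_map(queries, keys):
    """Single-head scaled dot-product attention weights.

    queries: (n_neurons, d) embeddings of target neurons
    keys:    (n_neurons, d) embeddings of source neurons
    Returns an (n_neurons, n_neurons) row-stochastic matrix whose
    entry (i, j) can be read as the model's attention from target
    neuron i to source neuron j -- a candidate causal score.
    """
    d = queries.shape[-1]
    return softmax(queries @ keys.T / np.sqrt(d), axis=-1)

# Toy setup: in the paper these embeddings would come from a
# transformer trained to forecast neuronal population dynamics;
# here they are random placeholders.
rng = np.random.default_rng(0)
n_neurons, d_model = 5, 8
Q = rng.normal(size=(n_neurons, d_model))
K = rng.normal(size=(n_neurons, d_model))

A = cross_attention_map(Q, K)          # (5, 5) attention map
# A hypothetical ground-truth connectivity matrix; in practice one
# would threshold or rank A's entries and score them against it
# (e.g. with AUROC) to evaluate causal-relationship recovery.
W_true = (rng.random((n_neurons, n_neurons)) < 0.3).astype(float)
predicted_edges = A > (1.0 / n_neurons)  # naive threshold at uniform attention
```

Because the embeddings above are untrained noise, the map carries no real causal signal; the sketch only shows where such a signal would be extracted and how it could be compared to the known connectivity.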