

Poster in Workshop: Causal Representation Learning

A Sparsity Principle for Partially Observable Causal Representation Learning

Danru Xu · Dingling Yao · Sébastien Lachapelle · Perouz Taslakian · Julius von Kügelgen · Francesco Locatello · Sara Magliacane

Keywords: [ Partial Observability ] [ Causal Representation Learning ]


Abstract:

Causal representation learning (CRL) aims to identify high-level causal variables from low-level data, e.g. images. Current methods usually assume that all causal variables are captured in the high-dimensional observations. In this work, we focus on learning causal representations from data under partial observability, i.e., when some of the causal variables are not observed in the measurements, and the set of masked variables changes across samples. We introduce initial theoretical results for identifying causal variables under partial observability by exploiting a sparsity regularizer, focusing in particular on the linear and piecewise linear mixing function cases. We provide a theorem that allows us to identify the causal variables up to permutation and element-wise linear transformations in the linear case, and a lemma that allows us to identify causal variables up to linear transformation in the piecewise linear case. Finally, we provide a conjecture that would allow us to identify the causal variables up to permutation and element-wise linear transformations also in the piecewise linear case. We test the theorem and conjecture on simulated data, showing the effectiveness of our method.
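To make the sparsity intuition concrete, here is a minimal toy sketch (not the authors' code; the setup, variable names, and rotation construction are illustrative assumptions). Under a linear mixing, per-sample masks zero out some causal variables; the correct linear unmixing recovers those exact zeros, so the recovered latents are sparser on average than those produced by any entangling (e.g. rotated) unmixing. This gap is the signal a sparsity regularizer can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: d causal variables z, linear mixing x = (z * mask) @ M.T,
# where a per-sample boolean mask models partial observability
# (masked variables are absent from that sample).
n, d = 5000, 3
M = rng.normal(size=(d, d))           # ground-truth linear mixing
z = rng.normal(size=(n, d))           # ground-truth causal variables
mask = rng.random((n, d)) < 0.5       # True = variable present in the sample
x = (z * mask) @ M.T                  # observed low-level data

def avg_l1(W):
    """Average L1 norm of latents recovered by a candidate unmixing W."""
    return np.abs(x @ W).mean()

# Oracle unmixing: x @ W_true exactly recovers z * mask, zeros included.
W_true = np.linalg.inv(M).T

# A rotated unmixing spans the same subspace but entangles coordinates,
# turning the exact zeros of masked variables into nonzero values.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
W_rot = W_true @ R

# The true unmixing yields strictly sparser recovered latents on average.
print(avg_l1(W_true), avg_l1(W_rot))
```

In this sketch the sparsity gap appears because masked variables are exactly zero under the true unmixing; a regularizer penalizing the L1 norm of recovered latents therefore prefers unmixings aligned (up to permutation and element-wise rescaling) with the ground truth.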
