

Poster

Identifiability Guarantees for Causal Disentanglement from Soft Interventions

Jiaqi Zhang · Kristjan Greenewald · Chandler Squires · Akash Srivastava · Karthikeyan Shanmugam · Caroline Uhler

Great Hall & Hall B1+B2 (level 1) #916

Abstract:

Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model. Such a representation is identifiable if the latent model that explains the data is unique. In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable. When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions. Here, we show that identifiability can still be achieved with unobserved causal variables, given a generalized notion of faithfulness. Our results guarantee that we can recover the latent causal model up to an equivalence class and predict the effect of unseen combinations of interventions, in the limit of infinite data. We implement our causal disentanglement framework by developing an autoencoding variational Bayes algorithm and apply it to the problem of predicting combinatorial perturbation effects in genomics.
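To make the data regime in the abstract concrete, below is a minimal, hypothetical sketch of the setup it describes: latent variables related by a causal model, unpaired observational and interventional datasets, and soft interventions that change the mechanism of a single latent variable without cutting its parental dependencies. The linear SCM, the mean-shift form of the soft intervention, the linear mixing map, and all dimensions are illustrative assumptions, not the paper's actual model or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (illustrative only).
# Latent causal variables z = (z1, z2, z3) follow a linear SCM over a DAG:
#   z1 = e1
#   z2 = 0.8*z1 + e2
#   z3 = 0.5*z1 - 0.7*z2 + e3
# Observations x are an unknown mixing (here: a random linear map) of z.
d_latent, d_obs, n = 3, 10, 5000
A = np.array([[0.0,  0.0, 0.0],
              [0.8,  0.0, 0.0],
              [0.5, -0.7, 0.0]])          # lower-triangular edge weights (DAG)
G = rng.normal(size=(d_obs, d_latent))    # unknown decoder / mixing map

def sample_latents(n, shift_target=None, shift=2.0):
    """Sample z from the SCM. A soft intervention changes the mechanism of one
    latent variable (here: shifts its noise mean) while keeping its parents."""
    z = np.zeros((n, d_latent))
    for j in range(d_latent):             # topological order
        noise_mean = shift if j == shift_target else 0.0
        z[:, j] = z @ A[j] + rng.normal(loc=noise_mean, scale=1.0, size=n)
    return z

# Unpaired observational data and one interventional dataset per latent target,
# mirroring the data regime described in the abstract.
x_obs = sample_latents(n) @ G.T
x_int = {j: sample_latents(n, shift_target=j) @ G.T for j in range(d_latent)}

print(x_obs.shape, {j: v.shape for j, v in x_int.items()})
```

Identifiability, in this picture, asks whether the latent model (the DAG over z and its mechanisms) that explains all of these unpaired datasets is unique up to an equivalence class; the paper's autoencoding variational Bayes algorithm fits such a latent model and uses it to predict unseen combinations of interventions.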
