

Workshop

NeurIPS'24 Workshop on Causal Representation Learning

Guangyi Chen · Sara Magliacane · Zhijing Jin · Biwei Huang · Francesco Locatello · Peter Spirtes · Kun Zhang

MTG 10

Sun 15 Dec, 8:15 a.m. PST

Advanced Artificial Intelligence (AI) techniques based on deep representations, such as GPT and Stable Diffusion, have demonstrated exceptional capabilities in analyzing vast amounts of data and generating coherent responses from unstructured data. They achieve this through sophisticated architectures that capture subtle relationships and dependencies. However, these models predominantly identify dependencies rather than establishing and making use of causal relationships. This can lead to spurious correlations and algorithmic bias, limiting the models' interpretability and trustworthiness.

In contrast, traditional causal discovery methods aim to identify causal relationships within observed data in an unsupervised manner. While these methods show promising results in scenarios with fully observed data, they struggle in complex real-world situations where the causal effects occur in latent spaces, as with images, videos, and possibly text.

Recently, causal representation learning (CRL) has made significant progress in addressing the aforementioned challenges, demonstrating great potential for understanding the causal relationships underlying observed data. These techniques are expected to enable researchers to identify latent causal variables and discern the relationships among them, which provides an efficient way to disentangle representations and enhance the reliability and interpretability of models.

The goal of this workshop is to explore the challenges and opportunities in this field, discuss recent progress, identify open questions, and provide a platform to inspire cross-disciplinary collaborations.
