Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
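To make the setting concrete: in nonlinear blind source separation one observes mixtures x = f(s) of statistically independent sources s and asks whether s (and f) can be recovered from x alone. One way to make "each source independently influencing the mixing" quantitative is through the columns ∂f/∂s_i of the Jacobian of f: by Hadamard's inequality, the sum of their log norms is at least log|det J_f(s)|, with equality exactly when the columns are orthogonal, so the gap measures deviation from such independent influence. The sketch below is a hedged illustration of this idea under those assumptions, not necessarily the exact objective used in the paper; the names ima_gap, f, and the toy mixing A are placeholders.

```python
# Hedged sketch: quantifying "each source independently influencing the mixing"
# via the Hadamard gap of the mixing Jacobian. The gap is nonnegative and
# vanishes iff the Jacobian columns are orthogonal. Names (ima_gap, f, A) are
# illustrative and not taken from the paper.
import jax
import jax.numpy as jnp


def ima_gap(f, s):
    """Hadamard gap of the Jacobian of a square mixing f at source point s."""
    J = jax.jacfwd(f)(s)                             # J[i, j] = d f_i / d s_j
    col_norms = jnp.linalg.norm(J, axis=0)           # one norm per source direction
    _, logabsdet = jnp.linalg.slogdet(J)             # log |det J_f(s)|
    return jnp.sum(jnp.log(col_norms)) - logabsdet   # >= 0; 0 iff columns orthogonal


# Toy usage: average the gap over source samples for a smooth, invertible 2-D mixing.
A = jnp.array([[1.0, 0.5], [-0.3, 1.0]])             # arbitrary invertible linear part
f = lambda s: jnp.tanh(A @ s) + 0.1 * (A @ s)         # elementwise nonlinearity after A
s_samples = jax.random.normal(jax.random.PRNGKey(0), (256, 2))
mean_gap = jnp.mean(jax.vmap(lambda s: ima_gap(f, s))(s_samples))
```

A mixing with orthogonal Jacobian columns everywhere (e.g. a pointwise nonlinearity composed with a rotation) would drive this average gap to zero, whereas a generic nonlinear mixing would not.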
Author Information
Luigi Gresele (MPI for Intelligent Systems, Tübingen)
Julius von Kügelgen (Max Planck Institute for Intelligent Systems Tübingen & University of Cambridge)
Vincent Stimper (University of Cambridge)
Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen)
Michel Besserve (MPI for Intelligent Systems, Tübingen)
More from the Same Authors
- 2021 Spotlight: Iterative Teaching by Label Synthesis
  Weiyang Liu · Zhen Liu · Hanchen Wang · Liam Paull · Bernhard Schölkopf · Adrian Weller
- 2021 Spotlight: DiBS: Differentiable Bayesian Structure Learning
  Lars Lorch · Jonas Rothfuss · Bernhard Schölkopf · Andreas Krause
- 2021: Boxhead: A Dataset for Learning Hierarchical Representations
  Yukun Chen · Andrea Dittadi · Frederik Träuble · Stefan Bauer · Bernhard Schölkopf
- 2021: Julius von Kügelgen - Independent mechanism analysis, a new concept?
  Julius von Kügelgen
- 2021 Poster: Dynamic Inference with Neural Interpreters
  Nasim Rahaman · Muhammad Waleed Gondal · Shruti Joshi · Peter Gehler · Yoshua Bengio · Francesco Locatello · Bernhard Schölkopf
- 2021 Poster: Causal Influence Detection for Improving Efficiency in Reinforcement Learning
  Maximilian Seitzer · Bernhard Schölkopf · Georg Martius
- 2021 Poster: Iterative Teaching by Label Synthesis
  Weiyang Liu · Zhen Liu · Hanchen Wang · Liam Paull · Bernhard Schölkopf · Adrian Weller
- 2021 Poster: The Inductive Bias of Quantum Kernels
  Jonas Kübler · Simon Buchholz · Bernhard Schölkopf
- 2021 Poster: Backward-Compatible Prediction Updates: A Probabilistic Approach
  Frederik Träuble · Julius von Kügelgen · Matthäus Kleindessner · Francesco Locatello · Bernhard Schölkopf · Peter Gehler
- 2021 Poster: Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style
  Julius von Kügelgen · Yash Sharma · Luigi Gresele · Wieland Brendel · Bernhard Schölkopf · Michel Besserve · Francesco Locatello
- 2021 Poster: DiBS: Differentiable Bayesian Structure Learning
  Lars Lorch · Jonas Rothfuss · Bernhard Schölkopf · Andreas Krause
- 2021 Poster: Regret Bounds for Gaussian-Process Optimization in Large Domains
  Manuel Wuethrich · Bernhard Schölkopf · Andreas Krause
- 2020 Poster: Modeling Shared responses in Neuroimaging Studies through MultiView ICA
  Hugo Richard · Luigi Gresele · Aapo Hyvarinen · Bertrand Thirion · Alexandre Gramfort · Pierre Ablin
- 2020 Spotlight: Modeling Shared responses in Neuroimaging Studies through MultiView ICA
  Hugo Richard · Luigi Gresele · Aapo Hyvarinen · Bertrand Thirion · Alexandre Gramfort · Pierre Ablin
- 2020 Poster: Relative gradient optimization of the Jacobian term in unsupervised deep learning
  Luigi Gresele · Giancarlo Fissore · Adrián Javaloy · Bernhard Schölkopf · Aapo Hyvarinen
- 2019: Bernhard Schölkopf
  Bernhard Schölkopf
- 2018: Learning Independent Mechanisms
  Bernhard Schölkopf