

Workshop

Causal Learning

Martin Arjovsky · Christina Heinze-Deml · Anna Klimovskaia · Maxime Oquab · Léon Bottou · David Lopez-Paz

Room 220 C

Site for the workshop: https://sites.google.com/view/nips2018causallearning/home

The route from machine learning to artificial intelligence remains uncharted. Recent efforts describe some of the conceptual problems that lie along this route [4, 9, 12]. The goal of this workshop is to investigate how much progress is possible by framing these problems beyond learning correlations, that is, by uncovering and leveraging causal relations:

1. Machine learning algorithms solve statistical problems (e.g., maximum likelihood estimation) as a proxy for the tasks we actually care about (e.g., recognizing objects). Unfortunately, spurious correlations and biases are often easier to learn than the task itself [14], leading to unreliable or unfair predictions. This phenomenon can be framed as causal confounding (see the first sketch after this list).

2. Machines trained on large pools of i.i.d. data often fail confidently when deployed in different circumstances (e.g., adversarial examples, dataset biases [18]). In contrast, humans seek prediction rules that are robust across multiple conditions. Allowing machines to learn robust rules from multiple environments can be framed as searching for causal invariances [2, 11, 16, 17] (see the second sketch below).

3. Humans benefit from discrete structures when reasoning. Such structures seem less useful to learning machines; for instance, neural machine translation systems outperform those that model language structure explicitly. However, the purpose of this structure might not be to model common sentences, but to help us formulate new ones. Modeling potential new sentences rather than observed ones is a form of counterfactual reasoning [8, 9].

4. Intelligent agents do not only observe, but also shape the world with their actions. Maintaining plausible causal models of the world allows an agent to build intuitions, as well as to design intelligent experiments and interventions to test them [16, 17]. Is causal understanding necessary for efficient reinforcement learning?

5. Humans learn compositionally; after learning simple skills, we are able to recombine them quickly to solve new tasks. Such abilities have so far eluded our machine learning systems. Causal models are compositional, so they might offer a solution to this puzzle [4].

6. Finally, humans are able to digest large amounts of unsupervised signals into a causal model of the world. Humans can learn causal affordances, that is, imagine how to manipulate new objects to achieve goals and what the outcome of doing so would be. Humans rely on a simple blueprint for a complex world: models that contain the correct causal structures, but ignore irrelevant details [16, 17].
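
To make points 1 and 4 concrete, here is a minimal Python sketch (an illustration added for this page, not workshop material; all coefficients and variable names are made up). It samples a linear structural causal model with a hidden confounder Z: the observational regression of Y on X overstates the causal effect, while contrasting the interventions do(X=1) and do(X=0) recovers it.

import numpy as np

rng = np.random.default_rng(0)

def sample(n=100_000, do_x=None):
    """Sample from the SCM  Z -> X,  Z -> Y,  X -> Y.
    Passing do_x simulates the intervention do(X = do_x): the Z -> X
    mechanism is replaced by a constant; all other mechanisms are kept."""
    z = rng.normal(size=n)                       # hidden confounder
    if do_x is None:
        x = 1.5 * z + rng.normal(size=n)         # observational mechanism
    else:
        x = np.full(n, float(do_x))              # intervention cuts Z -> X
    y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true causal effect of X: 1.0
    return x, y

# Observational: the OLS slope of Y on X is inflated by the confounder.
x, y = sample()
print("observational slope:   %.2f" % np.polyfit(x, y, 1)[0])   # ~1.92

# Interventional: contrasting do(X=1) and do(X=0) recovers the true effect.
_, y1 = sample(do_x=1.0)
_, y0 = sample(do_x=0.0)
print("interventional effect: %.2f" % (y1.mean() - y0.mean()))  # ~1.00

The gap between the two estimates is the confounding bias of point 1. Note also that the interventional model reuses every mechanism except the one intervened upon, a small instance of the compositionality mentioned in point 5.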

We cannot address these problems by simply performing inference on known causal graphs. We need to learn plausible causal models from data, and to construct predictors that are robust to distributional shifts. Furthermore, much prior work has focused on estimating explicit causal structures from data, but these methods often do not scale, rely on untestable assumptions such as faithfulness or acyclicity, and are difficult to incorporate into high-dimensional, complex, and nonlinear machine learning pipelines. Instead of treating the estimation of causal graphs as an end in itself, learning machines may use notions from causation indirectly: to ignore biases, generalize across distributions, leverage structure to reason, design efficient interventions, benefit from compositionality, and build causal models of the world in an unsupervised way.
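
The following sketch, in the spirit of invariant causal prediction [11] (again an illustration rather than the authors' method; the two environments and the variance test are choices made for this page), shows how multiple environments expose spurious predictors. X1 causes Y in every environment, while X2 is a child of Y whose mechanism shifts across environments: a pooled regression on X1 leaves identically distributed residuals everywhere, whereas a regression on X2 does not.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def environment(beta2, n=5_000):
    """One environment: X1 -> Y, and Y -> X2 with an
    environment-dependent coefficient beta2 (the unstable mechanism)."""
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + rng.normal(size=n)
    x2 = beta2 * y + rng.normal(size=n)
    return np.column_stack([x1, x2]), y

envs = [environment(beta2=0.5), environment(beta2=2.0)]

for name, cols in [("X1 (causal)", [0]), ("X2 (spurious)", [1])]:
    # Fit a single least-squares model on the pooled data.
    X_pool = np.vstack([X[:, cols] for X, _ in envs])
    y_pool = np.concatenate([y for _, y in envs])
    w, *_ = np.linalg.lstsq(X_pool, y_pool, rcond=None)
    # Invariance check: are the residuals equally spread in every environment?
    residuals = [y - X[:, cols] @ w for X, y in envs]
    p = stats.levene(*residuals).pvalue
    print(f"{name}: residual-invariance p-value = {p:.2g}")
# The causal feature passes the test (large p); the spurious one fails.

Roughly speaking, [11] searches over feature subsets and retains those whose predictions are invariant across environments, which yields confidence statements about the causal predictors.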


Call for papers

Submit your anonymous, NIPS-formatted manuscript here [https://easychair.org/cfp/NIPSCL2018]. All accepted submissions will be presented as posters, and a selection will be awarded a 5-minute spotlight presentation. We welcome conceptual, thought-provoking material, as well as research agendas, open problems, new tasks, and datasets.

Submission deadline: 28 October 2018
Acceptance notifications: 9 November 2018


Schedule:
See https://sites.google.com/view/nips2018causallearning/home for the up-to-date schedule.


Speakers:
Elias Bareinboim
David Blei
Nicolai Meinshausen
Bernhard Schölkopf
Isabelle Guyon
Csaba Szepesvári
Pietro Perona

References

1. Krzysztof Chalupka, Pietro Perona, Frederick Eberhardt (2015): Visual Causal Feature Learning [https://arxiv.org/abs/1412.2309]
2. Christina Heinze-Deml, Nicolai Meinshausen (2018): Conditional Variance Penalties and Domain Shift Robustness [https://arxiv.org/abs/1710.11469]
3. Fredrik D. Johansson, Uri Shalit, David Sontag (2016): Learning Representations for Counterfactual Inference [https://arxiv.org/abs/1605.03661]
4. Brenden Lake (2014): Towards more human-like concept learning in machines: compositionality, causality, and learning-to-learn [https://dspace.mit.edu/handle/1721.1/95856]
5. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman (2016): Building Machines That Learn and Think Like People [https://arxiv.org/abs/1604.00289]
6. David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Ilya Tolstikhin (2015): Towards a Learning Theory of Cause-Effect Inference [https://arxiv.org/abs/1502.02398]
7. David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, Léon Bottou (2017): Discovering Causal Signals in Images [https://arxiv.org/abs/1605.08179]
8. Judea Pearl (2009): Causality: Models, Reasoning, and Inference [http://bayes.cs.ucla.edu/BOOK-2K/]
9. Judea Pearl (2018): The Seven Pillars of Causal Reasoning with Reflections on Machine Learning [http://ftp.cs.ucla.edu/pub/stat_ser/r481.pdf]
10. Jonas Peters, Joris Mooij, Dominik Janzing, Bernhard Schölkopf (2014): Causal Discovery with Continuous Additive Noise Models [https://arxiv.org/abs/1309.6779]
11. Jonas Peters, Peter Bühlmann, Nicolai Meinshausen (2016): Causal inference using invariant prediction: identification and confidence intervals [https://arxiv.org/abs/1501.01332]
12. Jonas Peters, Dominik Janzing, Bernhard Schölkopf (2017): Elements of Causal Inference: Foundations and Learning Algorithms [https://mitpress.mit.edu/books/elements-causal-inference]
13. Peter Spirtes, Clark Glymour, Richard Scheines (2001): Causation, Prediction, and Search [http://cognet.mit.edu/book/causation-prediction-and-search]
14. Bob L. Sturm (2016): The HORSE conferences [http://c4dm.eecs.qmul.ac.uk/horse2016/, http://c4dm.eecs.qmul.ac.uk/horse2017/]
15. Dustin Tran, David M. Blei (2017): Implicit Causal Models for Genome-wide Association Studies [https://arxiv.org/abs/1710.10742]
16. Michael Waldmann (2017): The Oxford Handbook of Causal Reasoning [https://global.oup.com/academic/product/the-oxford-handbook-of-causal-reasoning-9780199399550?cc=us&lang=en]
17. James Woodward (2005): Making Things Happen: A Theory of Causal Explanation [https://global.oup.com/academic/product/making-things-happen-9780195189537?cc=us&lang=en&]
18. Antonio Torralba, Alexei A. Efros (2011): Unbiased Look at Dataset Bias [http://people.csail.mit.edu/torralba/publications/datasets_cvpr11.pdf]
