The presence of missing values makes supervised learning much more challenging. Indeed, previous work has shown that even when the response is a linear function of the complete data, the optimal predictor is a complex function of the observed entries and the missingness indicator. As a result, the computational or sample complexities of consistent approaches depend on the number of missing-data patterns, which can be exponential in the number of dimensions. In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing-data mechanisms, including Missing at Random (MAR) and self-masking (a Missing Not At Random, or MNAR, mechanism). Based on a Neumann-series approximation of the optimal predictor, we propose a new principled architecture, named NeuMiss networks. Their originality and strength come from the use of a new type of non-linearity: multiplication by the missingness indicator. We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they achieve good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing-data patterns. They therefore scale well to problems with many features and remain statistically efficient for medium-sized samples. Moreover, we show that, contrary to procedures based on EM or imputation, they are robust to the missing-data mechanism, including difficult MNAR settings such as self-masking.
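To make the mask-multiplication non-linearity concrete, here is a minimal sketch of a NeuMiss-style block in PyTorch. It assumes missing entries are encoded as NaN; the class name NeuMissBlock, the single shared weight matrix, and the depth parameter are illustrative choices for this sketch, not the authors' released implementation (the paper also discusses variants with per-iteration weights).

```python
import torch
import torch.nn as nn


class NeuMissBlock(nn.Module):
    """Sketch of a NeuMiss-style block: unrolled Neumann iterations
    whose only non-linearity is multiplication by the missingness mask."""

    def __init__(self, n_features: int, depth: int):
        super().__init__()
        self.depth = depth
        self.mu = nn.Parameter(torch.zeros(n_features))  # learned mean of X
        # One weight matrix shared across iterations (illustrative choice).
        self.linear = nn.Linear(n_features, n_features, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = (~torch.isnan(x)).float()      # 1 where observed, 0 where missing
        x = torch.nan_to_num(x, nan=0.0)   # zero-fill so arithmetic is defined
        h0 = (x - self.mu) * m             # centered input, masked
        h = h0
        for _ in range(self.depth):
            # Each Neumann iterate is projected back onto the observed
            # coordinates: the mask multiplication is the non-linearity.
            h = self.linear(h) * m + h0
        return h


# A linear head on top of the block yields the regression prediction.
model = nn.Sequential(NeuMissBlock(n_features=10, depth=3), nn.Linear(10, 1))
```

Note how the depth only controls the order of the Neumann approximation: the parameter count stays O(n_features^2) however many of the up to 2^n_features missing-data patterns occur, which is the scaling property the abstract highlights.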
Author Information
Marine Le Morvan (INRIA)
Julie Josse (INRIA/CMAP)
Thomas Moreau (INRIA)
Erwan Scornet (Ecole Polytechnique)
Gael Varoquaux (Parietal Team, INRIA)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: NeuMiss networks: differentiable programming for supervised learning with missing values
  Wed. Dec 9th 05:00 -- 07:00 PM, Poster Session 3 #1087
More from the Same Authors
- 2021 Spotlight: What's a good imputation to predict with missing values?
  Marine Le Morvan · Julie Josse · Erwan Scornet · Gael Varoquaux
- 2021: AI as statistical methods for imperfect theories
  Gael Varoquaux
- 2022 Poster: Benchopt: Reproducible, efficient and collaborative optimization benchmarks
  Thomas Moreau · Mathurin Massias · Alexandre Gramfort · Pierre Ablin · Pierre-Antoine Bannier · Benjamin Charlier · Mathieu Dagréou · Tom Dupré la Tour · Ghislain Durif · Cassio F. Dantas · Quentin Klopfenstein · Johan Larsson · En Lai · Tanguy Lefort · Benoît Malézieux · Badr Moufad · Binh T. Nguyen · Alain Rakotomamonjy · Zaccharie Ramzi · Joseph Salmon · Samuel Vaiter
- 2022 Poster: Deep invariant networks with differentiable augmentation layers
  Cédric Rommel · Thomas Moreau · Alexandre Gramfort
- 2022 Poster: A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
  Mathieu Dagréou · Pierre Ablin · Samuel Vaiter · Thomas Moreau
- 2022 Poster: Why do tree-based models still outperform deep learning on typical tabular data?
  Leo Grinsztajn · Edouard Oyallon · Gael Varoquaux
- 2021 Poster: What's a good imputation to predict with missing values?
  Marine Le Morvan · Julie Josse · Erwan Scornet · Gael Varoquaux
- 2020 Poster: Learning to solve TV regularised problems with unrolled algorithms
  Hamza Cherkaoui · Jeremias Sulam · Thomas Moreau
- 2020 Poster: Estimation and Imputation in Probabilistic Principal Component Analysis with Missing Not At Random Data
  Aude Sportisse · Claire Boyer · Julie Josse
- 2020 Poster: Debiasing Averaged Stochastic Gradient Descent to handle missing values
  Aude Sportisse · Claire Boyer · Aymeric Dieuleveut · Julie Josse
- 2020 Session: Orals & Spotlights Track 19: Probabilistic/Causality
  Julie Josse · Jasper Snoek
- 2019 Poster: Learning step sizes for unfolded sparse coding
  Pierre Ablin · Thomas Moreau · Mathurin Massias · Alexandre Gramfort
- 2019 Poster: Comparing distributions: $\ell_1$ geometry improves kernel two-sample testing
  Meyer Scetbon · Gael Varoquaux
- 2019 Spotlight: Comparing distributions: $\ell_1$ geometry improves kernel two-sample testing
  Meyer Scetbon · Gael Varoquaux
- 2019 Poster: Manifold-regression to predict from MEG/EEG brain signals without source modeling
  David Sabbagh · Pierre Ablin · Gael Varoquaux · Alexandre Gramfort · Denis A. Engemann
- 2018 Poster: Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals
  Tom Dupré la Tour · Thomas Moreau · Mainak Jas · Alexandre Gramfort
- 2017: Scikit-learn & nilearn: Democratisation of machine learning for brain imaging (INRIA)
  Gael Varoquaux
- 2017: Invited Talk: "Tales from fMRI: Learning from limited labeled data"
  Gael Varoquaux
- 2017 Poster: Universal consistency and minimax rates for online Mondrian Forests
  Jaouad Mourtada · Stéphane Gaïffas · Erwan Scornet
- 2017 Poster: Learning Neural Representations of Human Cognition across Many fMRI Studies
  Arthur Mensch · Julien Mairal · Danilo Bzdok · Bertrand Thirion · Gael Varoquaux
- 2016 Poster: Learning brain regions via large-scale online structured sparse dictionary learning
  Elvis Dohmatob · Arthur Mensch · Gael Varoquaux · Bertrand Thirion
- 2015 Poster: Semi-Supervised Factored Logistic Regression for High-Dimensional Neuroimaging Data
  Danilo Bzdok · Michael Eickenberg · Olivier Grisel · Bertrand Thirion · Gael Varoquaux
- 2013 Poster: Mapping paradigm ontologies to and from the brain
  Yannick Schwartz · Bertrand Thirion · Gael Varoquaux