The technical term “robust” was coined in 1953 by G. E. P. Box and exemplifies his adage, “all models are wrong, but some are useful”. Over the past decade, a broad range of new paradigms have appeared that allow useful inference when standard modeling assumptions are violated. Classic examples include heavy-tailed formulations that mitigate the effect of outliers which would otherwise degrade the performance of Gaussian-based methods.
High-dimensional data are becoming ubiquitous in diverse domains such as genomics, neuroimaging, economics, and finance. Such data heighten the relevance of robustness, as errors and model misspecification are prevalent in modern applications. To extract pertinent information from large-scale data, robust formulations require a comprehensive understanding of machine learning, optimization, and statistical signal processing, thereby integrating recovery guarantees, statistical and computational efficiency, algorithm design, and scaling issues. For example, robust Principal Component Analysis (RPCA) can be approached using both convex and nonconvex formulations, giving rise to trade-offs between computational efficiency and theoretical guarantees.
The goal of this workshop is to bring together machine learning, high-dimensional statistics, optimization, and select large-scale applications, in order to investigate the interplay between robust modeling and computation in the large-scale setting. We incorporate several important examples that are strongly linked by this theme:
(a) Low-rank matrix recovery, robust PCA, and robust dictionary learning: High-dimensional problems where the number of variables may greatly exceed the number of observations can be solved accurately by leveraging low-dimensional structural constraints on the parameters to be estimated. For matrix-structured parameters, low-rank recovery is a prime example of such a low-dimensional assumption. To efficiently recover the low-rank structure characterizing the data, robust PCA extends classical PCA to accommodate the grossly corrupted observations that have become ubiquitous in modern applications. Sparse coding and dictionary learning build upon the fact that many real-world signals can be represented as sparse linear combinations of basis vectors from an overcomplete dictionary, and aim at learning such an efficient representation of the data. They are used in a variety of tasks including image denoising and inpainting, texture synthesis, image classification, and unusual-event detection.
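The low-rank-plus-sparse split behind robust PCA can be made concrete with a minimal sketch (not from the workshop materials): the convex Principal Component Pursuit formulation, min ||L||_* + λ||S||_1 subject to L + S = M, solved with a basic ADMM loop. The function name and default parameter choices here are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, n_iter=200):
    """Principal Component Pursuit via ADMM: split M into a
    low-rank part L and a sparse part S.  A minimal sketch,
    not tuned for large-scale use (one SVD per iteration)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable for the constraint L + S = M
    for _ in range(n_iter):
        # low-rank update: singular-value thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # sparse update: elementwise soft thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S
```

The per-iteration SVD is exactly the computational bottleneck that motivates nonconvex factorized alternatives at scale.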
(b) Robust inference for large-scale inverse problems and machine learning: Many commonly encountered data are heavy-tailed, so the Gaussian assumption does not apply. The issue of robustness has been largely overlooked in the high-dimensional learning literature, yet it is critical when dealing with high-dimensional noisy data. Traditional likelihood-based estimators (including the Lasso and Group Lasso) are known to lack resilience to outliers and model misspecification. Despite this fact, there has been limited focus on robust learning methods in high-dimensional modeling.
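To illustrate the outlier-resilience point, here is a minimal hypothetical sketch (not tied to any specific workshop paper): a Huber-loss linear regression fit by iteratively reweighted least squares, which caps the influence of gross outliers rather than squaring them as ordinary least squares does.

```python
import numpy as np

def huber_regression(X, y, delta=1.0, n_iter=50):
    """Huber-loss linear regression via iteratively reweighted
    least squares (IRLS).  Residuals beyond `delta` get weight
    delta/|r| instead of 1, limiting the influence of outliers."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares warm start
    for _ in range(n_iter):
        r = y - X @ w
        wt = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        # weighted normal equations: X^T W X w = X^T W y
        w = np.linalg.solve(X.T @ (wt[:, None] * X), X.T @ (wt * y))
    return w
```

On data with a handful of gross outliers, this estimator stays close to the true coefficients while the least-squares fit is dragged toward the corrupted observations.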
(c) Nonconvex formulations: heavy tails, factorized matrix inversion, nonlinear forward models. Combining robustness with statistical efficiency requires nonconvexity of the loss function. Surprisingly, it is often possible to show that either certain nonconvex problems have exact convex relaxations, or that algorithms directly solving nonconvex problems may produce points that are statistically indistinguishable from the global optimum.
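A small illustration of this benign-nonconvexity phenomenon (an illustrative sketch, not an implementation from the workshop): fitting a rank-r factorization min over (U, V) of ||M - U V^T||_F^2 is nonconvex in (U, V) jointly, yet plain alternating least squares reaches a globally optimal rank-r approximation for exactly low-rank M.

```python
import numpy as np

def factored_lowrank(M, r, n_iter=50, seed=0):
    """Fit M ~ U @ V.T by alternating least squares.  The joint
    objective is nonconvex, but each subproblem is a linear
    least-squares solve, and for (near-)rank-r M the iterates
    converge to a globally optimal rank-r approximation."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((M.shape[0], r))
    V = np.zeros((M.shape[1], r))
    for _ in range(n_iter):
        V = np.linalg.lstsq(U, M, rcond=None)[0].T    # fix U, solve for V
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T  # fix V, solve for U
    return U, V
```

For an exactly rank-r matrix and a generic random initialization, a single sweep already projects onto the correct row space, so the fit reaches the global optimum despite the nonconvex landscape.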
(d) Robust optimization: avoiding overfitting on precise but unreliable parameters. This classic topic has become increasingly relevant as researchers purposefully perturb problems. The perturbation comes in many forms: “sketching” functions with Johnson-Lindenstrauss-like transformations, using randomized algorithms to speed up linear algebra, randomized coordinate descent, and stochastic gradient algorithms. Recently, the techniques of robust optimization have been applied to these settings.
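The sketching idea mentioned above fits in a few lines: compress a tall least-squares problem with a Gaussian Johnson-Lindenstrauss-style projection and solve the much smaller system. This is a minimal sketch; the function name and sketch size are illustrative assumptions.

```python
import numpy as np

def sketched_lstsq(A, b, sketch_size, seed=0):
    """Approximate least squares min_x ||Ax - b|| by compressing
    the rows with a Gaussian random projection S, then solving
    the smaller problem min_x ||S A x - S b||."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_size, A.shape[0])) / np.sqrt(sketch_size)
    return np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
```

The sketched solution is deliberately imprecise, which is precisely where robust-optimization thinking about unreliable problem data becomes relevant.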
It is the aim of this workshop to bring together researchers from statistics, machine learning, optimization, and applications, in order to build a comprehensive understanding of robust modeling and computation. In particular, we will examine the challenges of implementing robust formulations in the large-scale and nonconvex setting, as well as examples of success in these areas.
The workshop follows in the footsteps of the “Robust ML” workshop at NIPS 2010. The field is very active and there have been significant advances in the past four years. We also expect to cover new topics, such as applications of robust optimization to user-perturbed problems and Markov decision processes.
Author Information
Aurelie Lozano (IBM Research)
Aleksandr Y Aravkin (IBM TJ Watson Research Center)
Stephen Becker (University of Colorado)