Spotlight Poster
Regularization properties of adversarially-trained linear regression
Antonio Ribeiro · Dave Zachariah · Francis Bach · Thomas Schön
Event URL: https://github.com/antonior92/advtrain-linreg
State-of-the-art machine learning models can be vulnerable to very small, adversarially constructed input perturbations. Adversarial training is an effective approach to defend against such attacks. Formulated as a min-max problem, it searches for the best solution when the training data are corrupted by worst-case attacks. Linear models are among the simplest models in which these vulnerabilities can be observed and are the focus of our study. In this case, adversarial training leads to a convex optimization problem which can be formulated as the minimization of a finite sum. We provide a comparative analysis between the solution of adversarial training in linear regression and other regularization methods. Our main findings are that: (A) Adversarial training yields the minimum-norm interpolating solution in the overparameterized regime (more parameters than data points), as long as the maximum disturbance radius is smaller than a threshold; conversely, the minimum-norm interpolator is the solution to adversarial training with a given radius. (B) Adversarial training can be equivalent to parameter-shrinking methods (ridge regression and Lasso). This happens in the underparameterized regime, for an appropriate choice of adversarial radius and zero-mean, symmetrically distributed covariates. (C) For $\ell_\infty$-adversarial training, as in square-root Lasso, the choice of adversarial radius that yields optimal bounds does not depend on the additive noise variance. We confirm our theoretical findings with numerical examples.
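To make the formulation concrete, the sketch below (a minimal illustration, not the authors' code; see the repository linked above for their implementation) solves $\ell_\infty$-adversarial training for linear regression. It uses the standard fact that, for a linear model with squared loss, the inner maximization over perturbations with $\|\Delta x_i\|_\infty \le \delta$ has the closed form $(|y_i - x_i^\top \beta| + \delta \|\beta\|_1)^2$, so the min-max problem reduces to the convex finite-sum minimization mentioned in the abstract. Names such as `adv_train_linf` and `delta` are illustrative choices, not taken from the paper.

```python
# Minimal illustrative sketch (assumed setup, not the authors' code):
# ell_inf-adversarial training for linear regression. For a linear model
# with squared loss, the worst-case perturbation of radius delta gives
#   max_{||dx||_inf <= delta} (y_i - (x_i + dx)^T beta)^2
#     = (|y_i - x_i^T beta| + delta * ||beta||_1)^2,
# so the min-max problem becomes a convex finite-sum minimization.
import cvxpy as cp
import numpy as np

def adv_train_linf(X, y, delta):
    """Solve ell_inf-adversarial training for linear regression (illustrative)."""
    n, p = X.shape
    beta = cp.Variable(p)
    # Worst-case absolute residual per sample: nonnegative and convex in beta.
    worst_case = cp.abs(y - X @ beta) + delta * cp.norm1(beta)
    problem = cp.Problem(cp.Minimize(cp.sum(cp.square(worst_case)) / n))
    problem.solve()
    return beta.value

# Toy data: sparse ground truth plus additive noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_true = np.concatenate([np.ones(3), np.zeros(7)])
y = X @ beta_true + 0.1 * rng.standard_normal(50)

# As delta grows, the estimate shrinks towards zero (Lasso-like behaviour),
# in the spirit of finding (B) in the abstract; delta = 0 recovers least squares.
for delta in [0.0, 0.05, 0.2]:
    print(delta, np.round(adv_train_linf(X, y, delta), 2))
```

Swapping `cp.norm1(beta)` for `cp.norm(beta, 2)` gives the $\ell_2$-perturbation variant; the abstract's finding (B) relates these two cases to Lasso and ridge regression, respectively, under the stated conditions on the adversarial radius and the covariates.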
Author Information
Antonio Ribeiro (Uppsala University)
Dave Zachariah (Uppsala University)
Francis Bach (INRIA - Ecole Normale Superieure)
Francis Bach is a researcher at INRIA, where he has led the SIERRA project-team, part of the Computer Science Department at Ecole Normale Supérieure in Paris, France, since 2011. After completing his Ph.D. in Computer Science at U.C. Berkeley, he spent two years at Ecole des Mines, and joined INRIA and Ecole Normale Supérieure in 2007. He is interested in statistical machine learning, especially convex optimization, combinatorial optimization, sparse methods, kernel-based learning, vision and signal processing. In recent years he has taught numerous courses on optimization at summer schools. He was program co-chair of the International Conference on Machine Learning in 2015.
Thomas Schön (Uppsala University)
More from the Same Authors
- 2022 Poster: A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning
  Eloïse Berthier · Ziad Kobeissi · Francis Bach
- 2023 : On Feature Learning of Recursive Feature Machines and Automatic Relevance Determination
  Daniel Gedon · Amirhesam Abedsoltan · Thomas Schön · Misha Belkin
- 2023 Poster: On the impact of activation and normalization in obtaining isometric embeddings at initialization
  Amir Joudaki · Hadi Daneshmand · Francis Bach
- 2023 Poster: Differentiable Clustering with Perturbed Spanning Forests
  Lawrence Stewart · Francis Bach · Felipe Llinares-Lopez · Quentin Berthet
- 2022 Spotlight: Lightning Talks 1A-4
  Siwei Wang · Jing Liu · Nianqiao Ju · Shiqian Li · Eloïse Berthier · Muhammad Faaiz Taufiq · Arsene Fansi Tchango · Chen Liang · Chulin Xie · Jordan Awan · Jean-Francois Ton · Ziad Kobeissi · Wenguan Wang · Xinwang Liu · Kewen Wu · Rishab Goel · Jiaxu Miao · Suyuan Liu · Julien Martel · Ruobin Gong · Francis Bach · Chi Zhang · Rob Cornish · Sanmi Koyejo · Zhi Wen · Yee Whye Teh · Yi Yang · Jiaqi Jin · Bo Li · Yixin Zhu · Vinayak Rao · Wenxuan Tu · Gaetan Marceau Caron · Arnaud Doucet · Xinzhong Zhu · Joumana Ghosn · En Zhu
- 2022 Spotlight: A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning
  Eloïse Berthier · Ziad Kobeissi · Francis Bach
- 2022 Poster: Variational inference via Wasserstein gradient flows
  Marc Lambert · Sinho Chewi · Francis Bach · Silvère Bonnabel · Philippe Rigollet
- 2022 Poster: Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays
  Konstantin Mishchenko · Francis Bach · Mathieu Even · Blake Woodworth
- 2022 Poster: On the Theoretical Properties of Noise Correlation in Stochastic Optimization
  Aurelien Lucchi · Frank Proske · Antonio Orvieto · Francis Bach · Hans Kersting
- 2022 Poster: Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization
  Benjamin Dubois-Taine · Francis Bach · Quentin Berthet · Adrien Taylor
- 2022 Poster: Active Labeling: Streaming Stochastic Gradients
  Vivien Cabannes · Francis Bach · Vianney Perchet · Alessandro Rudi
- 2020 : Francis Bach - Where is Machine Learning Going?
  Francis Bach
- 2020 Poster: Learning Robust Decision Policies from Observational Data
  Muhammad Osama · Dave Zachariah · Peter Stoica
- 2019 Poster: Prediction of Spatial Point Processes: Regularized Method with Out-of-Sample Guarantees
  Muhammad Osama · Dave Zachariah · Peter Stoica
- 2019 Poster: Calibration tests in multi-class classification: A unifying framework
  David Widmann · Fredrik Lindsten · Dave Zachariah
- 2019 Spotlight: Calibration tests in multi-class classification: A unifying framework
  David Widmann · Fredrik Lindsten · Dave Zachariah
- 2019 Poster: Robust exploration in linear quadratic reinforcement learning
  Jack Umenberger · Mina Ferizbegovic · Thomas Schön · Håkan Hjalmarsson
- 2019 Spotlight: Robust exploration in linear quadratic reinforcement learning
  Jack Umenberger · Mina Ferizbegovic · Thomas Schön · Håkan Hjalmarsson
- 2018 Poster: Learning convex bounds for linear quadratic control policy synthesis
  Jack Umenberger · Thomas Schön
- 2018 Spotlight: Learning convex bounds for linear quadratic control policy synthesis
  Jack Umenberger · Thomas Schön
- 2017 : Concluding remarks
  Francis Bach · Benjamin Guedj · Pascal Germain
- 2017 : Neil Lawrence, Francis Bach and François Laviolette
  Neil Lawrence · Francis Bach · Francois Laviolette
- 2017 : Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance
  Francis Bach
- 2017 : Overture
  Benjamin Guedj · Francis Bach · Pascal Germain
- 2017 Workshop: (Almost) 50 shades of Bayesian Learning: PAC-Bayesian trends and insights
  Benjamin Guedj · Pascal Germain · Francis Bach
- 2017 Poster: On Structured Prediction Theory with Calibrated Convex Surrogate Losses
  Anton Osokin · Francis Bach · Simon Lacoste-Julien
- 2017 Oral: On Structured Prediction Theory with Calibrated Convex Surrogate Losses
  Anton Osokin · Francis Bach · Simon Lacoste-Julien
- 2017 Poster: Linearly constrained Gaussian processes
  Carl Jidling · Niklas Wahlström · Adrian Wills · Thomas Schön
- 2017 Poster: Nonlinear Acceleration of Stochastic Algorithms
  Damien Scieur · Francis Bach · Alexandre d'Aspremont
- 2017 Poster: Integration Methods and Optimization Algorithms
  Damien Scieur · Vincent Roulet · Francis Bach · Alexandre d'Aspremont
- 2016 : Francis Bach. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression.
  Francis Bach
- 2016 : Submodular Functions: from Discrete to Continuous Domains
  Francis Bach
- 2016 Tutorial: Large-Scale Optimization: Beyond Stochastic Gradient Descent and Convexity
  Suvrit Sra · Francis Bach
- 2014 Poster: Sequential Monte Carlo for Graphical Models
  Christian Andersson Naesseth · Fredrik Lindsten · Thomas Schön
- 2013 Poster: Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC
  Roger Frigola · Fredrik Lindsten · Thomas Schön · Carl Edward Rasmussen
- 2012 Poster: Ancestor Sampling for Particle Gibbs
  Fredrik Lindsten · Michael Jordan · Thomas Schön