Poster
Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs
Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi
Sparse linear regression with ill-conditioned Gaussian random covariates is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief. Recent work has shown that, for certain covariance matrices, the broad class of Preconditioned Lasso programs provably cannot succeed on polylogarithmically sparse signals with a sublinear number of samples. However, this lower bound holds only against deterministic preconditioners, and in many contexts randomization is crucial to the success of preconditioners. We prove a stronger lower bound that rules out randomized preconditioners. For an appropriate covariance matrix, we construct a single signal distribution on which any invertibly-preconditioned Lasso program fails with high probability, unless it receives a linear number of samples. Surprisingly, at the heart of our lower bound is a new robustness result in compressed sensing. In particular, we study recovery of a sparse signal when a few measurements can be erased adversarially. To our knowledge, this natural question has not been studied before for sparse measurements. Perhaps surprisingly, we show that standard sparse Bernoulli measurements are almost-optimally robust to adversarial erasures: if $b$ measurements are erased, then all but $O(b)$ coordinates of the signal are identifiable.
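To make the object of the lower bound concrete, here is a minimal sketch of what an invertibly-preconditioned Lasso program looks like in the noiseless (basis-pursuit) form, written with numpy and cvxpy. Everything in it is illustrative: the covariance matrix, the preconditioner choices, and the problem sizes are assumptions made for readability, not the adversarial construction from the paper.

```python
# Illustrative sketch only (not the paper's construction): an
# "invertibly-preconditioned Lasso" program in the noiseless,
# equality-constrained (basis-pursuit) form.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, k = 40, 100, 3          # samples, dimension, sparsity (illustrative sizes)

# An arbitrary ill-conditioned covariance (NOT the adversarial one from the paper).
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = U @ np.diag(np.logspace(0, -4, d)) @ U.T

# k-sparse ground-truth signal and a Gaussian design with covariance Sigma.
w_star = np.zeros(d)
w_star[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
y = X @ w_star                # noiseless responses

def preconditioned_lasso(X, y, S):
    """Solve min ||v||_1 s.t. (X S) v = y, then return w = S v."""
    v = cp.Variable(X.shape[1])
    cp.Problem(cp.Minimize(cp.norm(v, 1)), [(X @ S) @ v == y]).solve()
    return S @ v.value

# Two example preconditioners: the identity (vanilla Lasso) and a random
# invertible matrix. The paper's lower bound concerns an adversarial covariance
# and signal distribution for which no randomized choice of invertible S
# succeeds with a sublinear number of samples.
for S in [np.eye(d), rng.standard_normal((d, d))]:
    w_hat = preconditioned_lasso(X, y, S)
    print("recovery error:", np.linalg.norm(w_hat - w_star))
```

The equality-constrained form is used here because it mirrors the identifiability question in the abstract; in a noisy setting one would instead minimize a squared-error term plus an $\ell_1$ penalty on $v$.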
Author Information
Jonathan Kelner (MIT)
Frederic Koehler (MIT)
Raghu Meka (UCLA)
Dhruv Rohatgi (MIT)
More from the Same Authors
- 2021: Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint
  Dhruv Rohatgi
- 2022: Semi-Random Sparse Recovery in Nearly-Linear Time
  Jonathan Kelner · Jerry Li · Allen Liu · Aaron Sidford · Kevin Tian
- 2022: Statistical Efficiency of Score Matching: The View from Isoperimetry
  Frederic Koehler · Alexander Heckett · Andrej Risteski
- 2022 Panel: Panel 2C-7: Optimal Rates for… & Reconstruction on Trees…
  Frederic Koehler · Zhu Li
- 2022 Poster: A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
  Lijia Zhou · Frederic Koehler · Pragya Sur · Danica J. Sutherland · Nati Srebro
- 2022 Poster: Robust Generalized Method of Moments: A Finite Sample Viewpoint
  Dhruv Rohatgi · Vasilis Syrgkanis
- 2022 Poster: Reconstruction on Trees and Low-Degree Polynomials
  Frederic Koehler · Elchanan Mossel
- 2022 Poster: Sketching based Representations for Robust Image Classification with Provable Guarantees
  Nishanth Dikkala · Sankeerth Rao Karingula · Raghu Meka · Jelani Nelson · Rina Panigrahy · Xin Wang
- 2022 Poster: Learning in Observable POMDPs, without Computationally Intractable Oracles
  Noah Golowich · Ankur Moitra · Dhruv Rohatgi
- 2022 Poster: Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
  Sitan Chen · Aravind Gollakota · Adam Klivans · Raghu Meka
- 2021: Zoom Q&A for Contributed talks Session 3
  Dhruv Rohatgi
- 2021: Contributed talks Session 3
  Dhruv Rohatgi
- 2021 Oral: Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting
  Frederic Koehler · Lijia Zhou · Danica J. Sutherland · Nathan Srebro
- 2021 Poster: Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting
  Frederic Koehler · Lijia Zhou · Danica J. Sutherland · Nathan Srebro
- 2021 Poster: Efficiently Learning One Hidden Layer ReLU Networks From Queries
  Sitan Chen · Adam Klivans · Raghu Meka
- 2020 Poster: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
  Jonathan Kelner · Frederic Koehler · Raghu Meka · Ankur Moitra
- 2020 Poster: From Boltzmann Machines to Neural Networks and Back Again
  Surbhi Goel · Adam Klivans · Frederic Koehler
- 2020 Spotlight: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
  Jonathan Kelner · Frederic Koehler · Raghu Meka · Ankur Moitra
- 2020 Poster: Truncated Linear Regression in High Dimensions
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Poster: Constant-Expansion Suffices for Compressed Sensing with Generative Priors
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Poster: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability
  Sitan Chen · Frederic Koehler · Ankur Moitra · Morris Yau
- 2020 Spotlight: Constant-Expansion Suffices for Compressed Sensing with Generative Priors
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Spotlight: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability
  Sitan Chen · Frederic Koehler · Ankur Moitra · Morris Yau
- 2019 Poster: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay
  Frederic Koehler
- 2019 Spotlight: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay
  Frederic Koehler
- 2017 Poster: Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications
  Linus Hamilton · Frederic Koehler · Ankur Moitra