

Spotlight Poster

Feature Adaptation for Sparse Linear Regression

Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi

Great Hall & Hall B1+B2 (level 1) #1726

Abstract: Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian N(0,Σ), and we seek an estimator with small excess risk. If the true signal is t-sparse, information-theoretically, it is possible to achieve strong recovery guarantees with only O(t log n) samples. However, computationally efficient algorithms have sample complexity linear in (some variant of) the *condition number* of Σ. Classical algorithms such as the Lasso can require significantly more samples than necessary even if there is only a single sparse approximate dependency among the covariates.

We provide a polynomial-time algorithm that, given Σ, automatically adapts the Lasso to tolerate a small number of approximate dependencies. In particular, we achieve near-optimal sample complexity for constant sparsity and when Σ has few "outlier" eigenvalues.

Our algorithm fits into a broader framework of *feature adaptation* for sparse linear regression with ill-conditioned covariates. Within this framework, we additionally provide the first polynomial-factor improvement over brute-force search for constant sparsity t and arbitrary covariance Σ.
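The setting the abstract describes can be made concrete with a short sketch. The following is a minimal illustration, not the authors' algorithm: it draws covariates from N(0, Σ) where Σ contains a single approximate dependency between two covariates, plants a t-sparse signal, and fits the classical Lasso. The covariance construction, all numeric parameters, and the use of scikit-learn's `Lasso` are illustrative assumptions, not from the paper.

```python
# Minimal sketch of the correlated random design setting (illustrative
# only; this is the classical Lasso, not the paper's adapted algorithm).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, t = 200, 50, 3  # n features, m samples, sparsity t (all hypothetical)

# Covariance with one approximate dependency: feature 1 is nearly a copy
# of feature 0, so Sigma has eigenvalues 0.01 and 1.99 in that 2x2 block
# and is ill-conditioned.
Sigma = np.eye(n)
Sigma[0, 1] = Sigma[1, 0] = 0.99

# Draw covariates X ~ N(0, Sigma) and a t-sparse signal w*.
X = rng.multivariate_normal(np.zeros(n), Sigma, size=m)
w_star = np.zeros(n)
w_star[rng.choice(n, size=t, replace=False)] = 1.0
y = X @ w_star + 0.1 * rng.standard_normal(m)

# Fit the classical Lasso; its sample complexity can degrade with the
# conditioning of Sigma, the failure mode the paper addresses.
w_hat = Lasso(alpha=0.05).fit(X, y).coef_

# Excess risk under Gaussian design: (w_hat - w*)^T Sigma (w_hat - w*).
excess_risk = (w_hat - w_star) @ Sigma @ (w_hat - w_star)
print(f"excess risk: {excess_risk:.4f}")
```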
