We consider a high-dimensional linear regression problem where the goal is to efficiently recover an unknown vector \beta^* from n noisy linear observations Y = X\beta^* + W in R^n, for known X in R^{n \times p} and unknown noise W in R^n. Unlike most of the literature on this model, we make no sparsity assumption on \beta^*. Instead, we adopt a regularization based on the assumption that the entries of \beta^* are rational with a common denominator Q; we call this the Q-rationality assumption. We propose a new polynomial-time algorithm for this task based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that, under the Q-rationality assumption, our algorithm recovers the vector \beta^* exactly for a large class of distributions of the i.i.d. entries of X and nonzero noise W. We prove that it succeeds under small noise even when the learner has access to only one observation (n = 1). Furthermore, we prove that in the case of Gaussian white noise W, n = o(p/\log p), and sufficiently large Q, our algorithm tolerates a nearly optimal information-theoretic level of noise.
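To make the lattice idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's algorithm) of how LLL reduction can recover Q-rational coefficients from a single noiseless observation y = <x, \beta^*> with known denominator Q. All numerical values (x, Q, the planted numerators, the scaling constant C) are made up for the example; the lattice embeds the integer relation Q y = <x, z> so that an LLL-reduced basis exposes the numerator vector z as a short vector.

```python
from fractions import Fraction

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def gram_schmidt(basis):
    """Exact Gram-Schmidt orthogonalization over the rationals."""
    n = len(basis)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i, b in enumerate(basis):
        v = list(b)
        for j in range(i):
            mu[i][j] = dot(b, bstar[j]) / dot(bstar[j], bstar[j])
            v = [vk - mu[i][j] * wk for vk, wk in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction in exact rational arithmetic (simple, not fast)."""
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)
    bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                bstar, mu = gram_schmidt(B)
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                                 # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]        # swap and step back
            bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Hypothetical toy instance: p = 3, one noiseless observation (n = 1).
Q = 7
z_true = [2, -1, 3]                     # hidden integer numerators of beta*
x = [1000003, -712117, 590021]          # the single row of X, integer for exactness
y = Fraction(dot(x, z_true), Q)         # observation y = <x, beta*>

# Lattice rows: (e_i, 0, C*x_i) for each coordinate, plus (0, 0, 0, 1, C*Q*y).
# A short vector of the form (z, -1, 0) encodes the exact relation Q*y = <x, z>.
C = 10**8
rows = [[1 if j == i else 0 for j in range(4)] + [C * x[i]] for i in range(3)]
rows.append([0, 0, 0, 1, C * Q * y])

reduced = lll(rows)
# Among reduced vectors encoding exact relations (last coordinate 0, unit
# coefficient on the observation row), the shortest recovers +/-(z, -1, 0).
cands = [v for v in reduced if v[4] == 0 and abs(v[3]) == 1]
v = min(cands, key=lambda u: dot(u, u))
z = [-v[3] * v[i] for i in range(3)]
beta = [Fraction(zi, Q) for zi in z]    # recovered beta* = z / Q
```

With noise, the paper's actual algorithm requires a more careful lattice construction and analysis; this sketch only shows why Q-rationality turns exact recovery into a short-vector problem that LLL can solve in polynomial time.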
Author Information
Ilias Zadik (MIT)
I am in my fifth and final year of my PhD at the MIT Operations Research Center, where I am advised by Prof. David Gamarnik. My primary research interests are in high-dimensional statistics and the theory of machine learning.
David Gamarnik (Massachusetts Institute of Technology)
More from the Same Authors

2019 Poster: Sparse High-Dimensional Isotonic Regression »
David Gamarnik · Julia Gaudio 
2014 Poster: Hardness of parameter estimation in graphical models »
Guy Bresler · David Gamarnik · Devavrat Shah 
2014 Poster: Structure learning of antiferromagnetic Ising models »
Guy Bresler · David Gamarnik · Devavrat Shah