Consider the setting of Linear Regression where the observed response variables, in expectation, are linear functions of the p-dimensional covariates. To achieve vanishing prediction error, the number of required samples scales faster than pσ², where σ² is a bound on the noise variance. In a high-dimensional setting where p is large but the covariates admit a low-dimensional representation (say r ≪ p), Principal Component Regression (PCR), cf. [36], is an effective approach; here, the response variables are regressed with respect to the principal components of the covariates. The resulting number of required samples to achieve vanishing prediction error now scales faster than rσ² (≪ pσ²). Despite the tremendous utility of PCR, its ability to handle settings with noisy, missing, and mixed (discrete and continuous) valued covariates is not understood and remains an important open challenge, cf. [24]. As the main contribution of this work, we address this challenge by rigorously establishing that PCR is robust to noisy, sparse, and possibly mixed-valued covariates. Specifically, under PCR, vanishing prediction error is achieved with the number of samples scaling as r max(σ², ρ⁻⁴ log⁵(p)), where ρ denotes the fraction of observed (noisy) covariates. We establish generalization error bounds on the performance of PCR, which provides a systematic approach to selecting the correct number of components r in a data-driven manner. The key to our result is a simple, but powerful, equivalence between (i) PCR and (ii) Linear Regression with covariate preprocessing via Hard Singular Value Thresholding (HSVT). From a technical standpoint, this work advances the state-of-the-art analysis for HSVT by establishing stronger guarantees with respect to the ∥·∥₂,∞ error of the estimated matrix, rather than the Frobenius norm / mean-squared error (MSE) as is commonly done in the matrix estimation/completion literature.
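The PCR–HSVT equivalence described above can be sketched in a few lines: regressing the responses on the top-r principal components of the covariates yields the same predictions as ordinary least squares on the rank-r hard-singular-value-thresholded covariate matrix. The function below is a minimal illustrative sketch of that idea (not the paper's implementation); all names are chosen here for illustration, and missing covariates are assumed to be filled in with zeros before the SVD, as is standard in matrix-completion-style preprocessing.

```python
import numpy as np

def pcr_via_hsvt(X, y, r):
    """Sketch of PCR as OLS on an HSVT-denoised covariate matrix.

    X : (n, p) covariate matrix (noisy / sparsely observed entries,
        with unobserved entries set to 0)
    y : (n,) response vector
    r : number of principal components (singular values) to retain
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # HSVT step: keep the top-r singular values, zero out the rest,
    # producing the best rank-r approximation of X.
    X_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    # Linear regression on the denoised covariates.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return X_hat, beta
```

On exactly rank-r covariates with noiseless responses, this recovers the responses exactly; with noise, the thresholding step is what yields the robustness properties the abstract describes.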
Author Information
Anish Agarwal (MIT)
Devavrat Shah (Massachusetts Institute of Technology)
Devavrat Shah is a professor of Electrical Engineering & Computer Science and Director of Statistics and Data Science at MIT. He received his PhD in Computer Science from Stanford. He received the Erlang Prize from the Applied Probability Society of INFORMS in 2010 and a NeurIPS best paper award in 2008.
Dennis Shen (Massachusetts Institute of Technology)
Dogyoon Song (Massachusetts Institute of Technology)
Related Events (a corresponding poster, oral, or spotlight)

2019 Oral: On Robustness of Principal Component Regression »
Fri Dec 13th 12:25 – 12:40 AM Room West Exhibition Hall C + B3
More from the Same Authors

2019 Tutorial: Synthetic Control »
Alberto Abadie · Vishal Misra · Devavrat Shah 
2018 Poster: Q-learning with Nearest Neighbors »
Devavrat Shah · Qiaomin Xie 
2017 Workshop: Nearest Neighbors for Modern Applications with Massive Data: An Age-old Solution with New Challenges »
George H Chen · Devavrat Shah · Christina Lee 
2017 Poster: Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation »
Christian Borgs · Jennifer Chayes · Christina Lee · Devavrat Shah 
2016 Poster: Blind Regression: Nonparametric Regression for Latent Variable Models via Collaborative Filtering »
Dogyoon Song · Christina Lee · Yihua Li · Devavrat Shah 
2014 Workshop: Analysis of Rank Data: Confluence of Social Choice, Operations Research, and Machine Learning »
Shivani Agarwal · Hossein Azari Soufiani · Guy Bresler · Sewoong Oh · David Parkes · Arun Rajkumar · Devavrat Shah 
2014 Poster: Hardness of parameter estimation in graphical models »
Guy Bresler · David Gamarnik · Devavrat Shah 
2014 Poster: A Latent Source Model for Online Collaborative Filtering »
Guy Bresler · George H Chen · Devavrat Shah 
2014 Spotlight: A Latent Source Model for Online Collaborative Filtering »
Guy Bresler · George H Chen · Devavrat Shah 
2014 Poster: Learning Mixed Multinomial Logit Model from Ordinal Data »
Sewoong Oh · Devavrat Shah 
2014 Poster: Structure learning of antiferromagnetic Ising models »
Guy Bresler · David Gamarnik · Devavrat Shah 
2013 Workshop: Crowdsourcing: Theory, Algorithms and Applications »
Jennifer Wortman Vaughan · Greg Stoddard · ChienJu Ho · Adish Singla · Michael Bernstein · Devavrat Shah · Arpita Ghosh · Evgeniy Gabrilovich · Denny Zhou · Nikhil Devanur · Xi Chen · Alexander Ihler · Qiang Liu · Genevieve Patterson · Ashwinkumar Badanidiyuru Varadaraja · Hossein Azari Soufiani · Jacob Whitehill 
2013 Poster: A Latent Source Model for Nonparametric Time Series Classification »
George H Chen · Stanislav Nikolov · Devavrat Shah 
2013 Poster: Computing the Stationary Distribution Locally »
Christina Lee · Asuman Ozdaglar · Devavrat Shah 
2012 Poster: Iterative ranking from pairwise comparisons »
Sahand N Negahban · Sewoong Oh · Devavrat Shah 
2012 Spotlight: Iterative ranking from pairwise comparisons »
Sahand N Negahban · Sewoong Oh · Devavrat Shah 
2011 Poster: Iterative Learning for Reliable Crowdsourcing Systems »
David R Karger · Sewoong Oh · Devavrat Shah 
2011 Oral: Iterative Learning for Reliable Crowdsourcing Systems »
David R Karger · Sewoong Oh · Devavrat Shah 
2009 Poster: A Data-Driven Approach to Modeling Choice »
Vivek Farias · Srikanth Jagabathula · Devavrat Shah 
2009 Spotlight: A Data-Driven Approach to Modeling Choice »
Vivek Farias · Srikanth Jagabathula · Devavrat Shah 
2009 Poster: Local Rules for Global MAP: When Do They Work ? »
Kyomin Jung · Pushmeet Kohli · Devavrat Shah 
2008 Poster: Inferring rankings under constrained sensing »
Srikanth Jagabathula · Devavrat Shah 
2008 Oral: Inferring rankings under constrained sensing »
Srikanth Jagabathula · Devavrat Shah 
2007 Spotlight: Message Passing for Max-weight Independent Set »
Sujay Sanghavi · Devavrat Shah · Alan S Willsky 
2007 Poster: Message Passing for Max-weight Independent Set »
Sujay Sanghavi · Devavrat Shah · Alan S Willsky 
2007 Poster: Local Algorithms for Approximate Inference in Minor-Excluded Graphs »
Kyomin Jung · Devavrat Shah