As in standard linear regression, in truncated linear regression we are given access to observations (A_i, y_i)_i whose dependent variable equals y_i = A_i^{\rm T} x^* + \eta_i, where x^* is some fixed unknown vector of interest and \eta_i is independent noise; except we are only given an observation if its dependent variable y_i lies in some "truncation set" S \subset \mathbb{R}. The goal is to recover x^* under favorable conditions on the A_i's and the noise distribution. We prove that there exists a computationally and statistically efficient method for recovering k-sparse n-dimensional vectors x^* from m truncated samples, which attains an optimal \ell_2 reconstruction error of O(\sqrt{(k \log n)/m}). As a corollary, our guarantees imply a computationally efficient and information-theoretically optimal algorithm for compressed sensing with truncation, such as may arise from measurement saturation effects. Our result follows from a statistical and computational analysis of the Stochastic Gradient Descent (SGD) algorithm for solving a natural adaptation of the LASSO optimization problem that accommodates truncation. This generalizes both (1) [Daskalakis et al. 2018], where no regularization is needed due to the low dimensionality of the data, and (2) [Wainwright 2009], where the objective function is simple due to the absence of truncation. In order to deal with both truncation and high dimensionality at the same time, we develop new techniques that not only generalize the existing ones but, we believe, are also of independent interest.
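To make the approach concrete, here is a minimal sketch of the flavor of algorithm the abstract describes: stochastic gradient descent on an \ell_1-regularized negative log-likelihood adapted to truncation. Everything concrete below is an illustrative assumption rather than the paper's specification: standard Gaussian N(0, 1) noise, a one-sided truncation set S = [0, \infty), the step-size schedule, and the penalty level lam. The extra "correction" term in the gradient, coming from the log-probability that a sample survives truncation, is what distinguishes this objective from ordinary LASSO; it accounts for the selection bias induced by only observing samples with y_i \in S.

```python
# A minimal sketch (not the paper's exact algorithm or tuning) of proximal SGD
# on an l1-regularized negative log-likelihood for truncated linear regression
# with Gaussian noise. The truncation interval S = [a, b], the step-size
# schedule, and the penalty lam are hypothetical, illustrative choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# k-sparse ground truth x* in n dimensions.
n, k, m_raw = 200, 5, 5000
x_star = np.zeros(n)
x_star[rng.choice(n, size=k, replace=False)] = 1.0

# Truncation set S = [a, b]: a sample (A_i, y_i) is observed only if y_i in S.
a, b = 0.0, np.inf

# Draw raw samples y_i = A_i^T x* + eta_i with eta_i ~ N(0, 1), then truncate.
A_all = rng.normal(size=(m_raw, n))
y_all = A_all @ x_star + rng.normal(size=m_raw)
keep = (y_all >= a) & (y_all <= b)
A, y = A_all[keep], y_all[keep]
m = len(y)

lam = np.sqrt(np.log(n) / m)  # l1 penalty at the sqrt(log(n)/m) scale

def grad_nll(x, Ai, yi):
    """Gradient of the per-sample truncated-Gaussian negative log-likelihood:
    nll(x) = (yi - mu)^2 / 2 + log P_{z ~ N(mu, 1)}[z in [a, b]], mu = Ai @ x.
    """
    mu = Ai @ x
    mass = max(norm.cdf(b - mu) - norm.cdf(a - mu), 1e-12)  # underflow guard
    correction = (norm.pdf(a - mu) - norm.pdf(b - mu)) / mass
    return (mu - yi + correction) * Ai

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1; keeps the iterates sparse."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for t in range(20 * m):
    i = rng.integers(m)                # one truncated sample per step
    eta = 0.005 / np.sqrt(t + 1)       # decaying step size (illustrative)
    x = soft_threshold(x - eta * grad_nll(x, A[i], y[i]), eta * lam)

print(f"kept {m}/{m_raw} samples; l2 error = {np.linalg.norm(x - x_star):.3f}")
```

Dropping the correction term recovers plain LASSO-style SGD, which is biased here: conditioning on y_i \geq 0 skews the observed residuals upward, and the log-mass term is exactly what compensates for that skew.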
Author Information
Constantinos Daskalakis (Massachusetts Institute of Technology)
Dhruv Rohatgi (Massachusetts Institute of Technology)
Emmanouil Zampetakis (Massachusetts Institute of Technology)
More from the Same Authors
- 2021: Estimation of Standard Asymmetric Auction Models
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021: Near-Optimal No-Regret Learning in General Games
  Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
- 2021: Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint
  Dhruv Rohatgi
- 2021 Spotlight 4: Estimation of Standard Asymmetric Auction Models
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021: Zoom Q&A for Contributed talks Session 3
  Dhruv Rohatgi
- 2021: Contributed talks Session 3
  Dhruv Rohatgi
- 2021 Poster: Near-Optimal No-Regret Learning in General Games
  Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
- 2021 Poster: Efficient Truncated Linear Regression with Unknown Noise Variance
  Constantinos Daskalakis · Patroklos Stefanou · Rui Yao · Emmanouil Zampetakis
- 2021 Oral: Near-Optimal No-Regret Learning in General Games
  Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
- 2020 Poster: Tight last-iterate convergence rates for no-regret learning in multi-player games
  Noah Golowich · Sarath Pattathil · Constantinos Daskalakis
- 2020 Poster: Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions
  Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis
- 2020 Spotlight: Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions
  Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis
- 2020 Poster: Constant-Expansion Suffices for Compressed Sensing with Generative Priors
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Spotlight: Constant-Expansion Suffices for Compressed Sensing with Generative Priors
  Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis
- 2020 Poster: Independent Policy Gradient Methods for Competitive Reinforcement Learning
  Constantinos Daskalakis · Dylan Foster · Noah Golowich
- 2018: Improving Generative Adversarial Networks using Game Theory and Statistics
  Constantinos Daskalakis
- 2018 Poster: Learning and Testing Causal Models with Interventions
  Jayadev Acharya · Arnab Bhattacharyya · Constantinos Daskalakis · Saravanan Kandasamy
- 2018 Poster: Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons
  Nima Anari · Constantinos Daskalakis · Wolfgang Maass · Christos Papadimitriou · Amin Saberi · Santosh Vempala
- 2018 Poster: HOGWILD!-Gibbs can be PanAccurate
  Constantinos Daskalakis · Nishanth Dikkala · Siddhartha Jayanti
- 2018 Poster: The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization
  Constantinos Daskalakis · Ioannis Panageas
- 2017 Poster: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data
  Constantinos Daskalakis · Nishanth Dikkala · Gautam Kamath
- 2015 Poster: Optimal Testing for Properties of Distributions
  Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath
- 2015 Spotlight: Optimal Testing for Properties of Distributions
  Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath