Poster
Sample Complexity of Learning Mixture of Sparse Linear Regressions
Akshay Krishnamurthy · Arya Mazumdar · Andrew McGregor · Soumyabrata Pal
East Exhibition Hall B, C #51
Keywords: [ Algorithms ] [ Sparsity and Compressed Sensing ] [ Algorithms -> Density Estimation; Algorithms -> Regression; Theory ] [ Information Theory ]
In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated on an unknown signal drawn uniformly from this collection. This setting is quite expressive and has been studied both in terms of practical applications and for the sake of establishing theoretical guarantees. In this paper, we consider the case where the signal vectors are sparse; this generalizes the popular compressed sensing paradigm. We improve upon the state-of-the-art results as follows: In the noisy case, we resolve an open question of Yin et al. (IEEE Transactions on Information Theory, 2019) by showing how to handle collections of more than two vectors, and we present the first robust reconstruction algorithm, i.e., if the signals are not perfectly sparse, we still learn a good sparse approximation of the signals. In the noiseless case, as well as in the noisy case, we show how to circumvent the need for a restrictive assumption required in the previous work. Our techniques are quite different from those in the previous work: for the noiseless case, we rely on a property of sparse polynomials, and for the noisy case, we provide new connections to learning Gaussian mixtures and use ideas from the theory of error-correcting codes.
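For readers new to the setting, the sketch below simulates the measurement model described in the abstract: each response is a (possibly noisy) linear measurement of a signal drawn uniformly at random from a small collection of sparse vectors, and the component labels are hidden from the learner. All names and parameter values here (`n`, `L`, `k`, `sigma`, `measure`) are illustrative assumptions, not notation from the paper.

```python
# Minimal sketch of the mixture-of-sparse-linear-regressions measurement model.
# Parameter names and values are hypothetical, chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n, L, k, sigma = 100, 3, 5, 0.1  # ambient dimension, mixture size, sparsity, noise level

# Unknown collection of L k-sparse signal vectors in R^n.
betas = np.zeros((L, n))
for b in betas:
    support = rng.choice(n, size=k, replace=False)
    b[support] = rng.standard_normal(k)

def measure(m):
    """Draw m (query, response) pairs: each response is a noisy linear
    measurement of a signal chosen uniformly from the collection."""
    X = rng.standard_normal((m, n))   # Gaussian query vectors
    labels = rng.integers(L, size=m)  # hidden mixture component per sample
    y = np.einsum("ij,ij->i", X, betas[labels]) + sigma * rng.standard_normal(m)
    return X, y                       # labels are NOT revealed to the learner

X, y = measure(1000)  # the learner sees only (X, y) and must recover all L signals
```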