Poster
Finding a sparse vector in a subspace: Linear sparsity using alternating directions
Qing Qu · Ju Sun · John Wright
We consider the problem of recovering the sparsest vector in a subspace $\mathcal{S} \subseteq \mathbb{R}^p$ with $\dim(\mathcal{S}) = n$. This problem can be considered a homogeneous variant of the sparse recovery problem, and arises in sparse dictionary learning, sparse PCA, and other problems in signal processing and machine learning. Simple convex heuristics for this problem provably break down when the fraction of nonzero entries in the target sparse vector substantially exceeds $1/\sqrt{n}$. In contrast, we exhibit a relatively simple nonconvex approach based on alternating directions, which provably succeeds even when the fraction of nonzero entries is $\Omega(1)$. To our knowledge, this is the first practical algorithm to achieve this linear scaling. This result assumes a planted sparse model, in which the target sparse vector is embedded in an otherwise random subspace. Empirically, our proposed algorithm also succeeds in more challenging data models arising, e.g., from sparse dictionary learning.
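The alternating-directions idea in the abstract admits a compact sketch: starting from a random direction in the subspace, alternate a soft-thresholding step (promoting sparsity) with a projection back onto the subspace. Below is a minimal Python sketch under the planted sparse model, assuming an orthonormal basis `Y` for $\mathcal{S}$; the function names, the choice $\lambda = 1/\sqrt{p}$, the iteration count, and the single random initialization are illustrative assumptions, not the paper's exact algorithm or guarantees.

```python
import numpy as np

def soft_threshold(x, lam):
    """Entrywise soft-thresholding: shrinks each entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_vector_adm(Y, lam=None, iters=1000, seed=0):
    """Alternating scheme for finding a sparse vector in the column span of Y.

    Y : (p, n) matrix with orthonormal columns spanning the subspace S.
    Alternates a sparsity-promoting step on x = Y q with a projection of x
    back onto the subspace (via the coefficient vector q on the unit sphere).
    """
    p, n = Y.shape
    lam = 1.0 / np.sqrt(p) if lam is None else lam  # assumed scaling
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for _ in range(iters):
        x = soft_threshold(Y @ q, lam)  # sparsify the current subspace element
        q = Y.T @ x                     # best subspace fit to the sparse iterate
        nrm = np.linalg.norm(q)
        if nrm < 1e-12:                 # everything thresholded away; stop
            break
        q /= nrm                        # stay on the unit sphere
    return Y @ q                        # candidate sparse vector in S

# Planted sparse model: one sparse direction hidden in a random subspace.
rng = np.random.default_rng(1)
p, n, k = 200, 10, 20
v = np.zeros(p)
v[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
B = np.column_stack([v / np.linalg.norm(v), rng.standard_normal((p, n - 1))])
Y, _ = np.linalg.qr(B)                  # orthonormal basis containing v's span
x_hat = sparse_vector_adm(Y)
print("large entries:", int(np.sum(np.abs(x_hat) > 0.1 * np.abs(x_hat).max())))
```

In practice a single random initialization may land on a spurious point; the paper's analysis relies on running from multiple initializations and a rounding step, which this sketch omits for brevity.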
Author Information
Qing Qu (University of Michigan)
Ju Sun (Stanford University)
John Wright (Columbia University)
More from the Same Authors
- 2021 Poster: Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery
  Junhui Zhang · Jingkai Yan · John Wright
- 2021 Poster: Deep Networks Provably Classify Data on Curves
  Tingran Wang · Sam Buchanan · Dar Gilboa · John Wright
- 2012 Workshop: Analysis Operator Learning vs. Dictionary Learning: Fraternal Twins in Sparse Modeling
  Martin Kleinsteuber · Francis Bach · Remi Gribonval · John Wright · Simon Hawe
- 2012 Poster: Learning with Partially Absorbing Random Walks
  Xiao-Ming Wu · Zhenguo Li · Shih-Fu Chang · John Wright · Anthony Man-Cho So