Poster
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression
Neha Gupta · Aaron Sidford
Room 210 #58
Keywords: [ Regression ] [ Components Analysis (e.g., CCA, ICA, LDA, PCA) ]
Abstract:
In this paper, we obtain improved running times for regression and top eigenvector computation for numerically sparse matrices. Given a data matrix $A \in \mathbb{R}^{n \times d}$ where every row $a \in \mathbb{R}^{d}$ has $\|a\|_2^2 \leq L$ and numerical sparsity at most $s$, i.e. $\|a\|_1^2 / \|a\|_2^2 \leq s$, we provide faster algorithms for these problems for many parameter settings.
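As an aside (not part of the original abstract), the numerical sparsity of a row is always at most its number of nonzero entries, by Cauchy-Schwarz:
\[
\|a\|_1^2 \;=\; \Bigl(\sum_{i\,:\,a_i \neq 0} |a_i|\Bigr)^{2} \;\leq\; \mathrm{nnz}(a)\,\|a\|_2^2 ,
\]
so $s \leq \mathrm{nnz}(a) \leq d$, while a row that is dense but dominated by a few large entries can still have small numerical sparsity.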
For top eigenvector computation, when $\mathrm{gap} > 0$ is the relative gap between the top two eigenvalues of $A^\top A$ and $r$ is the stable rank of $A$, we obtain a running time of $\tilde{O}\!\left(nd + r(s + \sqrt{rs})/\mathrm{gap}^2\right)$ improving upon the previous best unaccelerated running time of $\tilde{O}\!\left(nd + rd/\mathrm{gap}^2\right)$. As $r \leq d$ and $s \leq d$, our algorithm everywhere improves or matches the previous bounds for all parameter settings.
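A quick check of the "improves or matches" claim, using only the bounds above: since $s \leq d$ and $r \leq d$,
\[
r\bigl(s + \sqrt{rs}\bigr) \;\leq\; r\bigl(d + \sqrt{d \cdot d}\bigr) \;=\; 2rd ,
\]
so the new term $r(s + \sqrt{rs})/\mathrm{gap}^2$ never exceeds the previous term $rd/\mathrm{gap}^2$ by more than a constant factor, and is strictly smaller whenever $s \ll d$.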
For regression, when $\mu > 0$ is the smallest eigenvalue of $A^\top A$, we obtain a running time of $\tilde{O}\!\left(nd + \frac{nL}{\mu}\sqrt{\frac{s\,nL}{\mu}}\right)$ improving upon the previous best unaccelerated running time of $\tilde{O}\!\left(nd + \frac{nLd}{\mu}\right)$. This result expands the regime in which regression can be solved in nearly linear time from $L = \tilde{O}(\mu)$ to $L = \tilde{O}\!\left(\mu \cdot d^{2/3}/(sn)^{1/3}\right)$.
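A sketch of where the nearly linear time threshold comes from, derived from the stated bound rather than quoted from the paper: the second term of the new running time is $\tilde{O}(nd)$ exactly when
\[
\frac{nL}{\mu}\sqrt{\frac{s\,nL}{\mu}} \;=\; \sqrt{s}\left(\frac{nL}{\mu}\right)^{3/2} \;=\; \tilde{O}(nd)
\quad\Longleftrightarrow\quad
\frac{L}{\mu} \;=\; \tilde{O}\!\left(\frac{d^{2/3}}{(sn)^{1/3}}\right),
\]
whereas the previous bound $\tilde{O}(nd + nLd/\mu)$ is nearly linear only when $L/\mu = \tilde{O}(1)$.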
Furthermore, we obtain similar improvements even when row norms and numerical sparsities are non-uniform, and we show how to achieve even faster running times by accelerating using approximate proximal point [Frostig et al., 2015] / catalyst [Lin et al., 2015]. Our running times depend only on the size of the input and natural numerical measures of the matrix, i.e. eigenvalues and norms, making progress on a key open problem regarding optimal running times for efficient large-scale learning.