Poster

Exponentially convergent stochastic k-PCA without variance reduction

Cheng Tang

East Exhibition Hall B + C #200

Keywords: [ Algorithms -> Online Learning ] [ Algorithms -> Representation Learning ] [ Applications -> Time Series Analysis ] [ Optimization ] [ Unsupervised Learning ] [ Algorithms ]


Abstract:

We present Matrix Krasulina, an algorithm for online k-PCA, obtained by generalizing the classic Krasulina's method (Krasulina, 1969) from the vector to the matrix case. We show, both theoretically and empirically, that the algorithm naturally adapts to low-rankness in the data and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
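To make the idea concrete, below is a minimal sketch of a Matrix Krasulina-style update, written from the abstract's description alone: the estimate W is a k x d matrix whose row space tracks the top-k principal subspace, and each sample triggers a rank-one correction along the component of the sample orthogonal to the current row space, mirroring Krasulina's vector rule. The function name matrix_krasulina_step, the step-size schedule, and the synthetic-data setup are illustrative assumptions, not the paper's precise algorithm.

```python
# Sketch of a Matrix Krasulina-style online k-PCA update (illustrative;
# the exact update and step sizes in the paper may differ in details).
import numpy as np

def matrix_krasulina_step(W, x, lr):
    """One stochastic update of W (k x d) with a single sample x (d,).

    r = x - W^T (W W^T)^{-1} W x is the part of x orthogonal to the
    current row space of W; the rank-one update nudges the rows of W
    toward explaining that residual, as in Krasulina's vector rule.
    """
    Wx = W @ x                            # (k,) projection coefficients
    c = np.linalg.solve(W @ W.T, Wx)      # solve (W W^T) c = W x; no explicit inverse
    r = x - W.T @ c                       # residual orthogonal to rowspace(W)
    return W + lr * np.outer(Wx, r)       # rank-one stochastic update

# Tiny usage example on synthetic, exactly low-rank data (assumed setup).
rng = np.random.default_rng(0)
d, k, n = 50, 3, 20000
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]  # ground-truth subspace
X = rng.standard_normal((n, k)) @ basis.T             # rank-k samples
W = 0.1 * rng.standard_normal((k, d))
for t, x in enumerate(X, start=1):
    W = matrix_krasulina_step(W, x, lr=1.0 / (10 + t) ** 0.7)

# Subspace recovery: residual of the ground-truth basis vs rowspace(W).
Q = np.linalg.qr(W.T)[0]
err = np.linalg.norm(basis - Q @ (Q.T @ basis))
print(f"subspace residual: {err:.2e}")
```

On intrinsically low-rank data the residual r vanishes as the row space of W aligns with the true subspace, so the stochastic updates themselves shrink; this is the mechanism that lets a truly online method converge fast without an O(n)-time variance-reduction pass.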
