Poster
A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning
Zhihui Zhu · Tianyu Ding · Daniel Robinson · Manolis Tsakiris · René Vidal

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #194

Minimizing a non-smooth function over the Grassmannian appears in many applications in machine learning. In this paper we show that if the objective satisfies a certain Riemannian regularity condition with respect to some point in the Grassmannian, then a Riemannian subgradient method with appropriate initialization and geometrically diminishing step size converges at a linear rate to that point. We show that for both the robust subspace learning method Dual Principal Component Pursuit (DPCP) and the Orthogonal Dictionary Learning (ODL) problem, the Riemannian regularity condition is satisfied with respect to appropriate points of interest, namely the subspace orthogonal to the sought subspace for DPCP and the orthonormal dictionary atoms for ODL. Consequently, within a unified framework, we obtain significant improvements in the convergence theory of both methods.
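For concreteness, below is a minimal NumPy sketch of the codimension-one (hyperplane) special case of DPCP, where the Grassmannian reduces to the unit sphere: a Riemannian subgradient step on f(b) = ||Y^T b||_1 with a geometrically diminishing step size mu_k = mu0 * beta^k. This is an illustrative sketch, not the authors' released code; the function name, parameter values (mu0, beta, num_iters), and the spectral initialization are assumptions made for the example.

```python
import numpy as np

def dpcp_riemannian_subgradient(Y, b0, mu0=0.1, beta=0.9, num_iters=200):
    """Sketch of a Riemannian subgradient method for the DPCP objective
    min_{||b||=1} ||Y^T b||_1 with geometrically diminishing step sizes.

    Y  : (D, N) data matrix (inliers near a hyperplane plus outliers)
    b0 : (D,) initial unit vector (e.g. a smallest left singular vector of Y)
    """
    b = b0 / np.linalg.norm(b0)
    for k in range(num_iters):
        # Euclidean subgradient of ||Y^T b||_1
        g = Y @ np.sign(Y.T @ b)
        # Project onto the tangent space of the sphere at b
        g_riem = g - (b @ g) * b
        # Geometrically diminishing step size, then retract back to the sphere
        b = b - (mu0 * beta**k) * g_riem
        b = b / np.linalg.norm(b)
    return b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D, N_in, N_out = 10, 300, 60
    # Synthetic data: inliers in the hyperplane orthogonal to e_1, plus outliers
    inliers = np.vstack([np.zeros(N_in), rng.standard_normal((D - 1, N_in))])
    outliers = rng.standard_normal((D, N_out))
    Y = np.hstack([inliers, outliers])
    # Spectral initialization: left singular vector of Y with smallest singular value
    b0 = np.linalg.svd(Y, full_matrices=False)[0][:, -1]
    b = dpcp_riemannian_subgradient(Y, b0)
    print("alignment with true normal:", abs(b[0]))
```

With a sufficiently good initialization and suitable mu0 and beta, the iterates in this regime are expected to approach the normal of the inlier hyperplane at a linear rate, which is the behavior the paper establishes under its Riemannian regularity condition.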

Author Information

Zhihui Zhu (Johns Hopkins University)
Tianyu Ding (Johns Hopkins University)
Daniel Robinson (Johns Hopkins University)
Manolis Tsakiris (ShanghaiTech University)
René Vidal (Mathematical Institute for Data Science, Johns Hopkins University)