Poster
Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu
Room 210 #50
Keywords: [ Learning Theory ] [ Non-Convex Optimization ]
Abstract:
We design a stochastic algorithm to find ε-approximate local minima of any smooth nonconvex function in rate O(ε^{-3.25}), with only oracle access to stochastic gradients. The best result before this work was O(ε^{-4}) by stochastic gradient descent (SGD).
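For context, a common convention (not spelled out on this page, so treat the exact thresholds as an assumption) is that a point x is an ε-approximate local minimum of a smooth function f when its gradient is small and its Hessian is nearly positive semidefinite, for instance

\|\nabla f(x)\| \le \varepsilon \qquad \text{and} \qquad \nabla^{2} f(x) \succeq -\sqrt{\varepsilon}\, I,

where the precise exponent in the Hessian threshold varies between papers.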