Poster
Affine-Invariant Online Optimization and the Low-rank Experts Problem
Tomer Koren · Roi Livni
Pacific Ballroom #54
Keywords: [ Online Learning ] [ Optimization ]
Abstract:
We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends on the best possible preconditioning of the problem in retrospect and on its \emph{intrinsic} dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order $\sqrt{rT}$ for the low-rank experts problem, improving by a $\sqrt{r}$ factor over the previously best known bound and resolving an open problem posed by Hazan et al. (2016).
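The abstract states the guarantee but not the update rule. For intuition only, below is a minimal sketch of a lazy (FTRL-style) Newton update applied to the experts setting, assuming the regularizer is the data-dependent quadratic $\tfrac{1}{2} x^\top A_t x$ with $A_t = \sum_{s \le t} g_s g_s^\top$; the function name `lazy_newton_experts`, the step size `eta`, the ridge `eps`, and the inner solver are illustrative assumptions and may differ from the paper's exact procedure.

```python
# Illustrative sketch (not the paper's pseudocode): an FTRL-style "lazy Newton"
# update for the experts problem with linear losses g_t, using the assumed
# data-dependent quadratic regularizer (1/2) x^T A_t x, A_t = sum_s g_s g_s^T.
import numpy as np
from scipy.optimize import minimize

def lazy_newton_experts(losses, eta=1.0, eps=1e-8):
    """losses: (T, N) array of per-round expert losses; returns (T, N) plays."""
    T, N = losses.shape
    A = eps * np.eye(N)          # accumulated outer products (small ridge for stability)
    G = np.zeros(N)              # cumulative gradient sum_{s<=t} g_s
    x = np.full(N, 1.0 / N)      # start from the uniform distribution over experts
    plays = np.zeros((T, N))

    for t in range(T):
        plays[t] = x
        g = losses[t]            # gradient of the linear loss <g_t, x>
        G += g
        A += np.outer(g, g)

        # Lazy (FTRL-style) step: minimize eta*<G, x> + 0.5*x^T A x over the simplex.
        def obj(z):
            return eta * G @ z + 0.5 * z @ A @ z
        def grad(z):
            return eta * G + A @ z
        res = minimize(obj, x, jac=grad,
                       bounds=[(0.0, 1.0)] * N,
                       constraints=[{"type": "eq", "fun": lambda z: z.sum() - 1.0}],
                       method="SLSQP")
        x = res.x

    return plays

# Example usage on a synthetic low-rank loss matrix (rank 3, hypothetical data):
# rng = np.random.default_rng(0)
# L = np.clip(rng.random((1000, 3)) @ rng.random((3, 10)), 0.0, 1.0)
# plays = lazy_newton_experts(L)
```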