

Poster

Dimension-Free Exponentiated Gradient

Francesco Orabona

Harrah's Special Events Center, 2nd Floor

Abstract: We present a new online learning algorithm that extends the exponentiated gradient to infinite dimensional spaces. Our analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(UT + 1) √T), instead of the standard O((U² + 1) √T), achievable without knowing U. For this analysis, we introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, we also show that the algorithm is optimal up to a log T term for linear and Lipschitz losses.
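The two regret bounds stated in the abstract can be compared numerically. The sketch below evaluates O(U log(UT + 1) √T) against the standard O((U² + 1) √T) with all hidden constants set to 1, so only the growth rates (not absolute values) are meaningful:

```python
import math

def dimension_free_bound(U, T):
    # Bound from the abstract: O(U * log(U*T + 1) * sqrt(T)),
    # achieved without knowing the competitor norm U in advance.
    return U * math.log(U * T + 1) * math.sqrt(T)

def standard_bound(U, T):
    # Standard bound achievable without knowing U: O((U^2 + 1) * sqrt(T)).
    return (U ** 2 + 1) * math.sqrt(T)

T = 10_000
for U in (0.1, 1.0, 10.0, 100.0):
    print(f"U={U:>6}: dimension-free ~ {dimension_free_bound(U, T):>12.1f}, "
          f"standard ~ {standard_bound(U, T):>12.1f}")
```

For large U the quadratic dependence U² in the standard bound dominates the U log(UT + 1) dependence, which is the regime where the improvement matters.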
