Poster
Stochastic and Adversarial Online Learning without Hyperparameters
Ashok Cutkosky · Kwabena A Boahen
Pacific Ballroom #53
Keywords: [ Online Learning ] [ Convex Optimization ] [ Hyperparameter Selection ]
Abstract:
Most online optimization algorithms focus on one of two things: performing well in adversarial settings by adapting to unknown data parameters (such as Lipschitz constants), typically achieving $O(\sqrt{T})$ regret, or performing well in stochastic settings where they can leverage some structure in the losses (such as strong convexity), typically achieving $O(\log(T))$ regret. Algorithms that focus on the former problem have hitherto achieved only $O(\sqrt{T})$ regret in the stochastic setting rather than $O(\log(T))$. Here we introduce an online optimization algorithm that achieves $O(\log^4(T))$ regret in a wide class of stochastic settings while gracefully degrading to the optimal $O(\sqrt{T})$ regret in adversarial settings (up to logarithmic factors). Our algorithm does not require any prior knowledge about the data or tuning of parameters to achieve superior performance.
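To make the regret notion in the abstract concrete, below is a minimal sketch of the online convex optimization protocol and how regret against the best fixed point is measured. The learner shown is plain online gradient descent with a fixed step size on hypothetical quadratic losses; it is an illustrative stand-in chosen for brevity, not the hyperparameter-free algorithm the paper introduces.

import numpy as np

# Sketch of the online learning protocol: at each round t the learner plays w_t,
# pays f_t(w_t), then observes the gradient and updates. Losses here are the
# hypothetical quadratics f_t(w) = 0.5 * ||w - target_t||^2, used only for illustration.

rng = np.random.default_rng(0)
T, d = 1000, 5
targets = rng.normal(size=(T, d))

def loss(w, target):
    return 0.5 * np.sum((w - target) ** 2)

def grad(w, target):
    return w - target

w = np.zeros(d)
eta = 0.1  # fixed step size -- exactly the kind of hyperparameter the paper aims to remove
learner_loss = 0.0

for t in range(T):
    learner_loss += loss(w, targets[t])   # learner pays f_t(w_t)
    w -= eta * grad(w, targets[t])        # then updates using the revealed gradient

# Regret = learner's cumulative loss minus that of the best fixed point in hindsight.
w_star = targets.mean(axis=0)             # minimizer of the summed quadratic losses
best_fixed_loss = sum(loss(w_star, targets[t]) for t in range(T))
print("regret:", learner_loss - best_fixed_loss)

Adversarial bounds of the form $O(\sqrt{T})$ and stochastic bounds of the form $O(\log(T))$ both refer to this regret quantity; the paper's contribution is a single algorithm that attains near-logarithmic regret when the losses have favorable stochastic structure while retaining the $\sqrt{T}$-type guarantee otherwise, without tuning parameters such as eta above.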