Poster

Better Full-Matrix Regret via Parameter-Free Online Learning

Ashok Cutkosky

Poster Session 3 #1022

Abstract:

We provide online convex optimization algorithms that guarantee improved full-matrix regret bounds. These algorithms extend prior work in several ways. First, we seamlessly allow for the incorporation of constraints without requiring unknown oracle-tuning for any learning rate parameters. Second, we improve the regret of the full-matrix AdaGrad algorithm by suggesting a better learning rate value and showing how to tune the learning rate to this value on-the-fly. Third, all our bounds are obtained via a general framework for constructing regret bounds that depend on an arbitrary sequence of norms.
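For context, the classical full-matrix AdaGrad update that the abstract builds on preconditions each gradient step by the inverse square root of the accumulated gradient outer-product matrix. A minimal NumPy sketch of that baseline update (the function name, hyperparameter values, and the quadratic test problem are illustrative; this shows the standard fixed-learning-rate algorithm, not the paper's improved on-the-fly tuning):

```python
import numpy as np

def full_matrix_adagrad(grad_fn, x0, eta=0.5, eps=1e-8, steps=200):
    """Sketch of full-matrix AdaGrad with a fixed learning rate eta.

    Maintains G_t = sum_s g_s g_s^T and takes preconditioned steps
    x_{t+1} = x_t - eta * G_t^{-1/2} g_t.
    """
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    G = np.zeros((d, d))
    for _ in range(steps):
        g = grad_fn(x)
        G += np.outer(g, g)
        # Inverse matrix square root via eigendecomposition of the PSD
        # matrix G (eps regularizes so the inverse is well defined).
        w, V = np.linalg.eigh(G + eps * np.eye(d))
        G_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        x -= eta * G_inv_sqrt @ g
    return x

# Illustrative use: minimize the quadratic f(x) = 0.5 * x^T A x,
# whose gradient is A @ x and whose minimizer is the origin.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
x_final = full_matrix_adagrad(lambda x: A @ x, x0=[5.0, -3.0])
```

The paper's contribution, per the abstract, is precisely to remove the need to hand-tune `eta` here: the improved learning-rate value is found on-the-fly, and constraints are handled without oracle tuning.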
