

Poster

Online Learning with Sublinear Best-Action Queries

Matteo Russo · Andrea Celli · Riccardo Colini Baldeschi · Federico Fusco · Daniel Haimovich · Dima Karamshuk · Stefano Leonardi · Niek Tax

West Ballroom A-D #5700
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: In online learning, a decision maker repeatedly selects one of a set of actions, with the goal of minimizing the overall loss incurred. Following the recent line of research on algorithms endowed with additional predictive features, we revisit this problem by allowing the decision maker to acquire additional information on the actions to be selected. In particular, we study the power of \emph{best-action queries}, which reveal beforehand the identity of the best action at a given time step. Since predictive features may be expensive in practice, we allow the decision maker to issue at most $k$ such queries. We establish tight bounds on the performance that any algorithm can achieve when given access to $k$ best-action queries, for different types of feedback models. In particular, we prove that in the full feedback model, $k$ queries are enough to achieve an optimal regret of $\Theta(\min\{\sqrt T, \frac{T}{k}\})$. This finding highlights the significant multiplicative advantage in the regret rate achievable with even a modest (sublinear) number $k \in \Omega(\sqrt{T})$ of queries. Additionally, we study the challenging setting in which the only available feedback is obtained during the time steps corresponding to the $k$ best-action queries. There, we provide a tight regret rate of $\Theta(\min\{\frac{T}{\sqrt k},\frac{T^2}{k^2}\})$, which improves over the standard $\Theta(\frac{T}{\sqrt k})$ regret rate for label efficient prediction for $k \in \Omega(T^{2/3})$.
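To make the setting concrete, here is a minimal toy simulation of the full feedback model, assuming i.i.d. uniform losses and a naive evenly-spaced query schedule (an illustration of the query model only, not the paper's optimal algorithm): the learner runs standard Hedge (multiplicative weights) on every round, except on the $k$ query rounds, where the best-action query lets it play that round's true loss minimizer.

```python
import math
import random

def hedge_with_queries(T, k, n=3, seed=0):
    """Toy full-feedback simulation: Hedge (multiplicative weights)
    over n actions, with k best-action queries spent on evenly spaced
    rounds (a naive schedule, for illustration only). Losses are
    i.i.d. uniform in [0, 1]. Returns regret vs. the best fixed action."""
    rng = random.Random(seed)
    eta = math.sqrt(math.log(n) / max(T, 1))          # standard Hedge rate
    query_rounds = {round(i * T / k) for i in range(k)} if k else set()
    weights = [1.0] * n
    alg_loss = 0.0
    cum = [0.0] * n                                   # cumulative per-action loss
    for t in range(T):
        losses = [rng.random() for _ in range(n)]
        if t in query_rounds:
            # Best-action query: the round's minimizer is revealed in advance.
            choice = min(range(n), key=lambda a: losses[a])
        else:
            total = sum(weights)
            choice = rng.choices(range(n), [w / total for w in weights])[0]
        alg_loss += losses[choice]
        for a in range(n):                            # full feedback: see all losses
            cum[a] += losses[a]
            weights[a] *= math.exp(-eta * losses[a])
    return alg_loss - min(cum)
```

With $k = T$ the learner plays the per-round minimizer everywhere, so its regret is at most zero; with $k = 0$ it reduces to plain Hedge. Comparing the two illustrates the multiplicative gain that queries can buy, though this even-spacing strategy does not attain the $\Theta(\min\{\sqrt T, \frac{T}{k}\})$ bound from the paper.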
