Poster
Model Selection in Contextual Stochastic Bandit Problems
Aldo Pacchiano · My Phan · Yasin Abbasi Yadkori · Anup Rao · Julian Zimmert · Tor Lattimore · Csaba Szepesvari

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1455
We study bandit model selection in stochastic environments. Our approach relies on a master algorithm that selects between candidate base algorithms. We develop a master-base algorithm abstraction that can work with general classes of base algorithms and different types of adversarial master algorithms. Our methods rely on a novel and generic smoothing transformation for bandit algorithms that permits us to obtain optimal $O(\sqrt{T})$ model selection guarantees for stochastic contextual bandit problems as long as the optimal base algorithm satisfies a high-probability regret guarantee. We show through a lower bound that even when one of the base algorithms has $O(\log T)$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret in model selection, even asymptotically. Using our techniques, we address model selection in a variety of problems such as misspecified linear contextual bandits (Lattimore et al., 2019), linear bandits with unknown dimension (Foster, Krishnamurthy, and Luo, 2019), and reinforcement learning with unknown feature maps. Our algorithm requires knowledge of the optimal base regret to adjust the master learning rate. We show that without such prior knowledge any master can suffer a regret larger than the optimal base regret.
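The master-base abstraction described above can be illustrated with a minimal sketch: an adversarial Exp3 master chooses which of several base bandit algorithms plays each round, and the realized reward is fed back to both the chosen base and the master via an importance-weighted estimate. This is only an illustrative toy (the bases here are simple epsilon-greedy learners, and it omits the paper's smoothing transformation and tuned learning rate); all class and parameter names are invented for the example.

```python
import math
import random

class EpsGreedyBase:
    """Illustrative epsilon-greedy base algorithm for a stochastic bandit."""
    def __init__(self, n_arms, eps):
        self.eps = eps
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms

    def select(self):
        # Explore with probability eps (or until every arm is tried once).
        if random.random() < self.eps or 0 in self.counts:
            return random.randrange(len(self.counts))
        return max(range(len(self.means)), key=lambda a: self.means[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

class Exp3Master:
    """Adversarial (Exp3-style) master choosing which base plays each round."""
    def __init__(self, n_bases, lr):
        self.lr = lr
        self.log_weights = [0.0] * n_bases

    def probs(self):
        m = max(self.log_weights)
        exp_w = [math.exp(w - m) for w in self.log_weights]
        z = sum(exp_w)
        return [w / z for w in exp_w]

    def select(self):
        r, acc = random.random(), 0.0
        for i, p in enumerate(self.probs()):
            acc += p
            if r <= acc:
                return i
        return len(self.log_weights) - 1

    def update(self, idx, reward):
        # Importance-weighted reward estimate for the chosen base.
        p = self.probs()
        self.log_weights[idx] += self.lr * reward / p[idx]

def run(T=2000, seed=0):
    random.seed(seed)
    arm_means = [0.4, 0.6]                      # 2-armed Bernoulli bandit
    bases = [EpsGreedyBase(2, eps=0.05),        # a well-tuned base
             EpsGreedyBase(2, eps=0.5)]         # an over-exploring base
    master = Exp3Master(len(bases), lr=math.sqrt(math.log(len(bases)) / T))
    total = 0.0
    for _ in range(T):
        b = master.select()                     # master picks a base
        arm = bases[b].select()                 # base picks an arm
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        bases[b].update(arm, reward)            # only the chosen base learns
        master.update(b, reward)
        total += reward
    return total / T
```

The key structural point the sketch captures is that only the selected base algorithm receives feedback in each round, which is why the paper's smoothing transformation and high-probability base guarantees are needed for the full analysis.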

Author Information

Aldo Pacchiano (UC Berkeley)
My Phan (University of Massachusetts Amherst)
Yasin Abbasi Yadkori (VinAI Research / VinTech JSC)
Anup Rao (School of Computer Science, Georgia Tech)
Julian Zimmert (Google)
Tor Lattimore (DeepMind)
Csaba Szepesvari (DeepMind / University of Alberta)