We consider off-policy evaluation in the contextual bandit setting for the purpose of robust off-policy selection, where a selection strategy is evaluated by the value of the policy it chooses from a set of proposal (target) policies. We propose a new method to compute a lower bound on the value of an arbitrary target policy from logged contextual bandit data, at a desired coverage level. The lower bound is built around the so-called self-normalized importance weighting (SN) estimator. It combines a semi-empirical Efron-Stein tail inequality to control the concentration with Harris' inequality to control the bias. The new approach is evaluated on a number of synthetic and real datasets and is found to be superior to its main competitors, both in the tightness of the confidence intervals and in the quality of the policies selected.
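To make the estimator at the heart of the bound concrete, here is a minimal sketch of the self-normalized importance weighting (SN) estimate on logged bandit data. The data layout (per-round logging propensities `mu`, target-policy probabilities `pi`, and rewards `r`) is an illustrative assumption, not the paper's actual interface, and the synthetic numbers are made up for the example.

```python
import numpy as np

# Hypothetical logged bandit data: for each logged round i we assume we know
# the logging policy's propensity mu(a_i | x_i), the target policy's
# probability pi(a_i | x_i) for the logged action, and the observed reward.
rng = np.random.default_rng(0)
n = 1000
mu = np.full(n, 0.5)                             # logging propensities
pi = rng.uniform(0.2, 0.8, size=n)               # target-policy probabilities
r = rng.binomial(1, 0.6, size=n).astype(float)   # binary rewards in {0, 1}

w = pi / mu  # importance weights

# Standard (unnormalized) importance-weighting estimate of the target value.
v_iw = np.mean(w * r)

# Self-normalized (SN) estimate: dividing by the sum of the weights rather
# than by n trades a small bias for variance reduction, and keeps the
# estimate inside the range of the observed rewards.
v_sn = np.sum(w * r) / np.sum(w)

print(v_iw, v_sn)
```

Because the SN estimate is a weighted average of the rewards, it is bounded by the reward range regardless of how large individual weights get; this boundedness is what makes it a natural anchor for a high-probability lower bound, at the cost of the bias that the paper's Harris-inequality argument controls.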
Claire Vernade (DeepMind)
Claire received her PhD from Telecom ParisTech (S2A team, Olivier Cappé) in October 2017, and is now a post-doc at Amazon CoreAI in Berlin and at the University of Magdeburg, working with Alexandra Carpentier. Her work focuses on designing and analyzing bandit models for recommendation, A/B testing, and other marketing-related applications. More broadly, she is interested in modeling external sources of uncertainty -- or bias -- in order to understand their impact on the complexity of learning and on the final result.
More from the Same Authors
2021: Panel Discussion
Elias Bareinboim · Mark van der Laan · Claire Vernade
2019 Poster: Weighted Linear Bandits for Non-Stationary Environments
Yoan Russac · Claire Vernade · Olivier Cappé
2016 Poster: Multiple-Play Bandits in the Position-Based Model
Paul Lagrée · Claire Vernade · Olivier Cappé