Uplifting Bandits

Yu-Guan Hsieh · Shiva Kasiviswanathan · Branislav Kveton

Hall J #731

Keywords: [ regret minimization ] [ structured bandits ] [ uplift ]

Tue 29 Nov 9 a.m. PST — 11 a.m. PST


We introduce a new multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of these variables. Upon taking an action, the agent observes the realizations of all variables. This model is motivated by marketing campaigns and recommender systems, where the variables represent outcomes on individual customers, such as clicks. We propose UCB-style algorithms that estimate the uplifts of the actions over a baseline. We study multiple variants of the problem, including when the baseline and affected variables are unknown, and prove sublinear regret bounds for all of these. In addition, we provide regret lower bounds that justify the necessity of our modeling assumptions. Experiments on synthetic and real-world datasets demonstrate the benefit of methods that estimate the uplifts over policies that do not use this structure.
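The model and the UCB-style uplift estimation described above can be illustrated with a small simulation. This is a hedged sketch, not the paper's algorithm: the variable counts, baseline probabilities, uplift values, and the assumption that each action's affected set is known are all illustrative choices. The key idea it demonstrates is that, since every action leaves the unaffected variables at their baseline distribution, an agent can rank actions by estimating only the totals of their affected variables.

```python
# Toy simulation of an uplifting bandit (illustrative parameters, not from the paper).
import math
import random

random.seed(0)

M = 6                # number of reward variables (e.g. individual customers)
K = 3                # number of actions
BASE = [0.3] * M     # baseline success probability of each variable

# Each action uplifts a small, disjoint subset of variables.
# Here the affected sets are assumed known; the paper also studies the unknown case.
AFFECTED = {0: {0, 1}, 1: {2, 3}, 2: {4, 5}}
UPLIFT = {0: 0.2, 1: 0.05, 2: 0.1}   # per-variable uplift of each action

def pull(action):
    """Draw all M variables; the action shifts only its affected ones."""
    probs = [BASE[i] + (UPLIFT[action] if i in AFFECTED[action] else 0.0)
             for i in range(M)]
    return [1 if random.random() < p else 0 for p in probs]

def run(T=3000):
    """UCB on the affected-variable totals: unaffected variables share a
    common baseline, so comparing these totals compares the uplifts."""
    counts = [0] * K
    sums = [0.0] * K     # running sum of affected-variable totals per action
    total_reward = 0
    for t in range(1, T + 1):
        if t <= K:                      # pull each action once to initialize
            a = t - 1
        else:                           # standard UCB index on the uplift estimate
            a = max(range(K), key=lambda k: sums[k] / counts[k]
                    + math.sqrt(2 * math.log(t) / counts[k]))
        obs = pull(a)
        total_reward += sum(obs)        # reward is the sum of ALL variables
        counts[a] += 1
        sums[a] += sum(obs[i] for i in AFFECTED[a])
    best = max(range(K), key=lambda k: sums[k] / counts[k])
    return best, total_reward

best, reward = run()
print(best)   # in this toy instance, action 0 has the largest uplift
```

Because each action's estimate is built from only two variables rather than all six, its confidence interval shrinks faster than one built from the full reward, which is the statistical benefit of exploiting the uplift structure.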
