

Poster

Are sample means in multi-armed bandits positively or negatively biased?

Jaehyeok Shin · Aaditya Ramdas · Alessandro Rinaldo

East Exhibition Hall B, C #12

Keywords: [ Bandit Algorithms ] [ Algorithms ] [ Frequentist Statistics ] [ Algorithms -> Adaptive Data Analysis; Theory ]


Abstract:

It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean. In this paper, we decouple three different sources of this selection bias: adaptive sampling of arms, adaptive stopping of the experiment, and adaptively choosing which arm to study. Through a new notion called "optimism" that captures certain natural monotonic behaviors of algorithms, we provide a clean and unified analysis of how optimistic rules affect the sign of the bias. The main takeaway message is that optimistic sampling induces a negative bias, but optimistic stopping and optimistic choosing both induce a positive bias. These results are derived in a general stochastic MAB setup that is entirely agnostic to the final aim of the experiment (regret minimization or best-arm identification or anything else). We provide examples of optimistic rules of each type, demonstrate that simulations confirm our theoretical predictions, and pose some natural but hard open problems.
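The negative bias from optimistic sampling can be seen in a minimal simulation. The sketch below (not the paper's own code; the greedy rule, arm distributions, and horizon are illustrative assumptions) pulls each of two Bernoulli(0.5) arms once and then greedily pulls the arm with the higher current sample mean. Averaged over many runs, arm 0's final sample mean falls below its true mean of 0.5, consistent with the claim that optimistic sampling induces a negative bias.

```python
import random

def greedy_bandit_run(T=3, p=(0.5, 0.5), rng=random):
    """One run: pull each arm once, then greedily pull the arm with the
    higher current sample mean (an 'optimistic' sampling rule, with ties
    broken uniformly at random). Returns arm 0's final sample mean."""
    sums = [0.0, 0.0]
    counts = [0, 0]
    for a in (0, 1):                      # initial pull of each arm
        sums[a] += rng.random() < p[a]
        counts[a] += 1
    for _ in range(T - 2):                # greedy pulls thereafter
        m0, m1 = sums[0] / counts[0], sums[1] / counts[1]
        if m0 > m1:
            a = 0
        elif m1 > m0:
            a = 1
        else:
            a = rng.randrange(2)
        sums[a] += rng.random() < p[a]
        counts[a] += 1
    return sums[0] / counts[0]

def estimate_bias(n_reps=200_000, seed=0):
    """Monte Carlo estimate of E[sample mean of arm 0] - true mean."""
    rng = random.Random(seed)
    total = sum(greedy_bandit_run(rng=rng) for _ in range(n_reps))
    return total / n_reps - 0.5

print(estimate_bias())  # a clearly negative number
```

In this tiny setting (T = 3) the bias can also be computed exactly by enumerating the four outcomes of the two initial pulls; it comes out to -1/16, so the simulation's negative estimate is not a sampling artifact.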
