Are sample means in multi-armed bandits positively or negatively biased?
Jaehyeok Shin · Aaditya Ramdas · Alessandro Rinaldo

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #12

It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean. In this paper, we decouple three different sources of this selection bias: adaptive \emph{sampling} of arms, adaptive \emph{stopping} of the experiment, and adaptively \emph{choosing} which arm to study. Through a new notion called ``optimism'' that captures certain natural monotonic behaviors of algorithms, we provide a clean and unified analysis of how optimistic rules affect the sign of the bias. The main takeaway message is that optimistic sampling induces a negative bias, but optimistic stopping and optimistic choosing both induce a positive bias. These results are derived in a general stochastic MAB setup that is entirely agnostic to the final aim of the experiment (regret minimization or best-arm identification or anything else). We provide examples of optimistic rules of each type, demonstrate that simulations confirm our theoretical predictions, and pose some natural but hard open problems.
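The abstract's central claim about optimistic sampling can be checked empirically. Below is a minimal, hypothetical simulation (not the authors' code) of a two-armed bandit in which both arms have true mean 0 and a greedy rule pulls whichever arm currently looks best; averaging the final sample mean of a fixed arm over many replications exhibits the predicted negative bias. All function names and parameter values here are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation) illustrating the
# claim that optimistic (greedy) *sampling* induces a negative bias in
# an arm's sample mean. Two Gaussian arms, both with true mean 0.
import random

def run_once(T=50, rng=random):
    sums = [0.0, 0.0]   # running reward sum per arm
    counts = [0, 0]     # pull count per arm
    for arm in (0, 1):  # pull each arm once to initialize its mean
        sums[arm] += rng.gauss(0.0, 1.0)
        counts[arm] += 1
    for _ in range(T - 2):
        # greedy ("optimistic sampling") rule: pull the arm whose
        # current sample mean is highest
        arm = 0 if sums[0] / counts[0] >= sums[1] / counts[1] else 1
        sums[arm] += rng.gauss(0.0, 1.0)
        counts[arm] += 1
    return sums[0] / counts[0]  # sample mean of arm 0 (true mean is 0)

def estimate_bias(reps=20000, seed=0):
    # average deviation of arm 0's sample mean from its true mean (0)
    rng = random.Random(seed)
    return sum(run_once(rng=rng) for _ in range(reps)) / reps

if __name__ == "__main__":
    print(estimate_bias())  # noticeably below 0: negative bias
```

Intuitively, an arm that starts with an unlucky streak stops being sampled and its low sample mean is frozen in place, while a lucky arm keeps being sampled and regresses toward its true mean; the asymmetry is exactly the negative bias the paper attributes to optimistic sampling.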

Author Information

Jaehyeok Shin (Carnegie Mellon University)
Aaditya Ramdas (Carnegie Mellon University)
Alessandro Rinaldo (Carnegie Mellon University)
