

Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits

Yifei Wang · Tavor Baharav · Yanjun Han · Jiantao Jiao · David Tse

Hall J (level 1) #724

Keywords: [ Multi-armed Bandits ] [ functional estimation ] [ information-theoretic lower bounds ] [ algorithm design ]


In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be pulled repeatedly to obtain noisy observations of its average reward. Prior work focuses on the best arm, i.e., estimating the maximum of the average reward distribution. We consider a general class of distribution functionals beyond the maximum and obtain optimal sample complexities in both offline and online settings. We show that online estimation, where the learner can sequentially choose whether to sample a new or existing arm, offers no advantage over the offline setting for estimating the mean functional, but significantly reduces the sample complexity for other functionals such as the median, maximum, and trimmed mean. We propose unified meta algorithms for the online and offline settings and derive matching lower bounds using different Wasserstein distances. For the special case of median estimation, we identify a curious thresholding phenomenon on the indistinguishability between Gaussian convolutions with respect to the noise level, which may be of independent interest.
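To make the offline setting concrete, below is a minimal simulation sketch, not the authors' algorithm: arm means are drawn i.i.d. from a hypothetical reward-mean distribution (Uniform[0, 1] here, stand-in for the unknown distribution), each pull adds Gaussian noise, and the mean functional is estimated by averaging per-arm empirical means over a fixed, non-adaptive allocation. The function names and parameter values are illustrative assumptions.

```python
import random

def sample_arm_mean():
    # Hypothetical reward-mean distribution F (Uniform[0, 1] here);
    # in the problem setup F is unknown and arm means are i.i.d. draws from it.
    return random.random()

def pull(arm_mean, noise=0.5):
    # Each pull of an arm returns its average reward plus zero-mean Gaussian noise.
    return arm_mean + random.gauss(0.0, noise)

def offline_mean_estimate(num_arms, pulls_per_arm, seed=0):
    # Offline scheme: the allocation is fixed in advance -- sample `num_arms`
    # fresh arms, pull each `pulls_per_arm` times, and average the per-arm
    # empirical means to estimate the mean functional E_F[mu].
    random.seed(seed)
    per_arm_means = []
    for _ in range(num_arms):
        mu = sample_arm_mean()
        rewards = [pull(mu) for _ in range(pulls_per_arm)]
        per_arm_means.append(sum(rewards) / len(rewards))
    return sum(per_arm_means) / len(per_arm_means)

estimate = offline_mean_estimate(num_arms=2000, pulls_per_arm=5)
print(round(estimate, 3))  # should concentrate near 0.5, the mean of Uniform[0, 1]
```

For the mean functional, the paper shows such a fixed offline allocation is already rate-optimal; the adaptive choice between sampling a new arm and re-pulling an existing one only helps for other functionals such as the median, maximum, and trimmed mean.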
