Poster in Workshop: Causal Machine Learning for Real-World Impact

Beyond Central Limit Theorem for Higher Order Inference in Batched Bandits

Yechan Park · Ruohan Zhan · Nakahiro Yoshida


Abstract:

Adaptive experiments have been gaining traction in a variety of domains, stimulating a growing literature on post-experimental statistical inference for data collected under such designs. Prior work constructs confidence intervals mainly with two types of methods: (i) martingale concentration inequalities and (ii) asymptotic approximations to the distribution of test statistics; this work contributes to the second kind. Current asymptotic approximation methods, however, mostly rely on first-order limit theorems, which can converge slowly in data-poor regimes. Moreover, established results often assume well-behaved noise, which can be problematic when real-world instances are heavy-tailed or asymmetric. In this paper, we propose the first higher-order asymptotic expansion formula for inference on adaptively collected data, generalizing the normal approximation to the distribution of standard test statistics. Our theorem relaxes assumptions on the noise distribution and enjoys a faster convergence rate that accommodates small sample sizes. We complement our theoretical results with promising empirical performance in simulations.
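To make the higher-order idea concrete, here is a minimal sketch, not the paper's method (which handles adaptively collected data), of the classical one-term Edgeworth expansion for an i.i.d. standardized mean: it adds an O(n^{-1/2}) skewness correction to the normal CDF. All names and parameter choices (`edgeworth_cdf`, the centered exponential noise, n = 20) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth expansion for the CDF of the standardized
    mean of n i.i.d. draws (unit variance) with skewness `skew`:
        P(Z_n <= x) ~= Phi(x) - skew * (x**2 - 1) * phi(x) / (6 * sqrt(n))
    The correction vanishes at rate O(n**-0.5), refining the CLT."""
    return norm.cdf(x) - skew * (x**2 - 1) * norm.pdf(x) / (6.0 * np.sqrt(n))

rng = np.random.default_rng(0)
n, reps = 20, 200_000                  # small sample size, many replications
# Centered Exponential(1) noise: mean 0, variance 1, skewness 2 (asymmetric).
draws = rng.exponential(1.0, size=(reps, n)) - 1.0
z = np.sqrt(n) * draws.mean(axis=1)    # standardized sample means (sigma = 1)

for x in (-1.5, 0.0, 1.5):
    print(f"x={x:+.1f}  empirical={np.mean(z <= x):.4f}  "
          f"normal={norm.cdf(x):.4f}  edgeworth={edgeworth_cdf(x, n, 2.0):.4f}")
```

With this asymmetric noise and n = 20, the Edgeworth values should track the empirical tail probabilities noticeably more closely than the plain normal approximation, mirroring the motivation above for going beyond first-order limit theorems.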
