Poster in Workshop: 5th Workshop on Meta-Learning

Efficient Automated Online Experimentation with Multi-Fidelity

Steven Kleinegesse · Zhenwen Dai · Andreas Damianou · Kamil Ciosek · Federico Tomasi


Abstract:

Prominent online experimentation approaches in industry, such as A/B testing, often do not scale with the number of candidate models. To address this shortcoming, recent work has introduced an automated online experimentation (AOE) scheme that uses a probabilistic model of user behavior to predict the online performance of candidate models. While effective, these predictions of online performance may be biased due to various unforeseen circumstances, such as user modelling bias, a shift in the data distribution, or an incomplete set of features. In this work, we leverage advances in multi-fidelity optimization to combine AOE with Bayesian optimization (BO), which mitigates the effect of biased predictions while retaining scalability and performance. Furthermore, our approach allows us to optimally adjust the number of users in a test cell, which is typically kept constant in online experimentation schemes, leading to a more effective allocation of resources. Our synthetic experiments show that our method outperforms AOE, BO, and other baseline approaches.
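The abstract does not include an implementation, so the following is only a minimal sketch of the general multi-fidelity BO idea it describes, not the authors' method. It assumes a hypothetical 1-D candidate space, a fidelity coordinate s in [0, 1] that interpolates between a cheap, biased AOE-style prediction (s = 0) and a full online test (s = 1), and made-up bias, noise, and cost models; the names `true_online_metric`, `observe`, and `query_cost` are all illustrative. A single GP over the joint (candidate, fidelity) space is paired with a standard expected-improvement-per-cost acquisition.

```python
# Toy multi-fidelity BO sketch (illustrative assumptions throughout).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def true_online_metric(x):
    # Hypothetical ground-truth online performance of candidate model x.
    return np.sin(6 * x) + 0.5 * x

def observe(x, s):
    # s = 0: cheap but biased AOE-style prediction; s = 1: full online test.
    bias = 0.4 * (1.0 - s) * np.cos(9 * x)                  # systematic low-fidelity bias
    noise = rng.normal(0.0, 0.3 / np.sqrt(1.0 + 99.0 * s))  # more users -> less noise
    return true_online_metric(x) + bias + noise

def query_cost(s):
    # Cost grows with fidelity, i.e. with the number of users in the test cell.
    return 1.0 + 9.0 * s

# One GP over the joint (candidate, fidelity) space learns how the cheap
# fidelity relates to the expensive one; alpha adds jitter for stability.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

# Seed with cheap, low-fidelity (AOE-like) evaluations only.
X = np.array([[x, 0.0] for x in np.linspace(0.0, 1.0, 8)])
y = np.array([observe(x, s) for x, s in X])

xs = np.linspace(0.0, 1.0, 101)
grid = np.array([[x, s] for x in xs for s in (0.0, 0.25, 0.5, 1.0)])
costs = np.array([query_cost(s) for _, s in grid])

for _ in range(25):
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    # Incumbent: best predicted performance at the target fidelity s = 1.
    mu_hi, _ = gp.predict(np.column_stack([xs, np.ones_like(xs)]), return_std=True)
    z = (mu - mu_hi.max()) / np.maximum(sd, 1e-9)
    ei = (mu - mu_hi.max()) * norm.cdf(z) + sd * norm.pdf(z)
    x_next, s_next = grid[np.argmax(ei / costs)]  # expected improvement per unit cost
    X = np.vstack([X, [x_next, s_next]])
    y = np.append(y, observe(x_next, s_next))

gp.fit(X, y)
mu_hi, _ = gp.predict(np.column_stack([xs, np.ones_like(xs)]), return_std=True)
print(f"recommended candidate: x = {xs[np.argmax(mu_hi)]:.3f}")
```

Reading the fidelity coordinate as the fraction of users assigned to a test cell connects this sketch to the abstract's point about resource allocation: the acquisition trades off information gained against the number of users a query consumes, rather than holding the cell size constant.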