Workshop: Adaptive Experimental Design and Active Learning in the Real World

Data-driven Prior Learning for Bayesian Optimisation

Sigrid Passano Hellan · Christopher G Lucas · Nigel Goddard


Transfer learning for Bayesian optimisation has generally assumed a strong similarity between optimisation tasks, with at least a subset sharing similar optima. This assumption can reduce computational costs, but it is violated in a wide range of optimisation problems where transfer learning may nonetheless be useful. We replace this assumption with a weaker one, requiring only that the shape of the optimisation landscape be similar, and analyse the recent method Prior Learning for Bayesian Optimisation (PLeBO) in this setting. By learning priors for the hyperparameters of the Gaussian process surrogate model, we can better approximate the underlying function, especially with few function evaluations. We validate the learned priors and compare against a breadth of transfer learning approaches, using synthetic data and a recent air pollution optimisation problem as benchmarks. We show that PLeBO, and prior transfer more generally, find good inputs in fewer evaluations.
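The core idea described in the abstract, placing learned priors on the hyperparameters of the Gaussian process surrogate, can be sketched as MAP estimation of kernel hyperparameters under a lengthscale prior. The sketch below is illustrative, not the authors' PLeBO implementation: the kernel choice, the log-normal prior, and all numeric values (prior mean `log 0.5`, width `0.3`) are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of prior transfer for Bayesian optimisation:
# fit GP kernel hyperparameters by MAP under a lengthscale prior that
# could have been learned from related optimisation tasks.

def rbf_kernel(X1, X2, lengthscale, variance):
    """Squared-exponential kernel."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def neg_log_posterior(log_params, X, y, prior_mu, prior_sigma, noise=1e-6):
    """Negative log marginal likelihood plus a Gaussian prior on the
    log-lengthscale (i.e. a log-normal prior on the lengthscale)."""
    lengthscale, variance = np.exp(log_params)
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    nll = 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
    # Without this term we recover plain maximum-likelihood fitting.
    nlp = 0.5 * ((log_params[0] - prior_mu) / prior_sigma) ** 2
    return nll + nlp

# Three early evaluations of a 1-D objective: too few for a reliable
# maximum-likelihood fit, so the (assumed) learned prior
# log-lengthscale ~ N(log 0.5, 0.3^2) regularises the surrogate.
X = np.array([[0.1], [0.5], [0.9]])
y = np.sin(3.0 * X[:, 0])

result = minimize(neg_log_posterior, x0=np.log([1.0, 1.0]),
                  args=(X, y, np.log(0.5), 0.3), method="L-BFGS-B")
map_lengthscale, map_variance = np.exp(result.x)
```

With only a handful of evaluations the likelihood is nearly flat in the hyperparameters, so the prior term dominates and keeps the surrogate close to the landscape shape seen on related tasks, which is exactly the few-evaluation regime the abstract highlights.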