

Poster

Scalable Global Optimization via Local Bayesian Optimization

David Eriksson · Michael Pearce · Jacob Gardner · Ryan Turner · Matthias Poloczek

East Exhibition Hall B + C #9

Keywords: [ AutoML ] [ Algorithms ] [ Gaussian Processes ] [ Probabilistic Methods ]


Abstract:

Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the TuRBO algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that TuRBO outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
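To make the idea concrete, below is a minimal, illustrative sketch of a TuRBO-style loop, not the authors' implementation. It assumes details beyond the abstract: hyperrectangular trust regions around each local model's best point that expand after repeated successes and shrink after repeated failures, and Thompson sampling to allocate each new evaluation across the local models (the "implicit bandit" step). It uses scikit-learn Gaussian processes and a toy objective purely for brevity.

```python
# Hedged sketch of a TuRBO-style local Bayesian optimization loop (minimization).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim, n_regions, n_cand, n_iters = 5, 3, 50, 30

def objective(x):
    # Toy black-box function standing in for an expensive simulation.
    return float(np.sum((x - 0.3) ** 2))

# Each trust region keeps its own observations and a side length in [0, 1]^dim.
regions = [{"X": rng.uniform(size=(5, dim)),
            "length": 0.4, "succ": 0, "fail": 0} for _ in range(n_regions)]
for r in regions:
    r["y"] = np.array([objective(x) for x in r["X"]])

for _ in range(n_iters):
    all_cand, all_samples, owners = [], [], []
    for i, r in enumerate(regions):
        # Fit a local probabilistic model on this region's data only.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(r["X"], r["y"])
        center = r["X"][np.argmin(r["y"])]
        lb = np.clip(center - r["length"] / 2, 0.0, 1.0)
        ub = np.clip(center + r["length"] / 2, 0.0, 1.0)
        cand = rng.uniform(lb, ub, size=(n_cand, dim))
        # One posterior draw per candidate (Thompson sampling).
        samples = gp.sample_y(cand, random_state=int(rng.integers(1 << 31))).ravel()
        all_cand.append(cand)
        all_samples.append(samples)
        owners.extend([i] * n_cand)

    # Implicit bandit step: the smallest posterior sample picks both the
    # trust region and the point to evaluate next.
    samples = np.concatenate(all_samples)
    best = int(np.argmin(samples))
    x_new = np.vstack(all_cand)[best]
    r = regions[owners[best]]
    y_new = objective(x_new)
    improved = y_new < r["y"].min()
    r["X"] = np.vstack([r["X"], x_new])
    r["y"] = np.append(r["y"], y_new)

    # Resize the trust region: expand after repeated successes, shrink otherwise.
    r["succ"], r["fail"] = (r["succ"] + 1, 0) if improved else (0, r["fail"] + 1)
    if r["succ"] >= 3:
        r["length"], r["succ"] = min(2 * r["length"], 1.0), 0
    elif r["fail"] >= 3:
        r["length"], r["fail"] = r["length"] / 2, 0

print(f"best value found: {min(r['y'].min() for r in regions):.4f}")
```

The key design choice illustrated here is that exploration is confined to local trust regions, while the Thompson-sampling step provides a principled global allocation of the evaluation budget across those regions; the exact update rules, candidate generation, and surrogate models in the paper differ from this sketch.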
