

Poster

Batched Gaussian Process Bandit Optimization via Determinantal Point Processes

Tarun Kathuria · Amit Deshpande · Pushmeet Kohli

Area 5+6+7+8 #129

Keywords: [ Gaussian Processes ] [ Stochastic Methods ] [ Bayesian Nonparametrics ] [ Bandit Algorithms ]


Abstract:

Gaussian process bandit optimization has emerged as a powerful tool for optimizing noisy black-box functions. One example in machine learning is hyper-parameter optimization, where each evaluation of the target function may require training a model, which can take days or even weeks of computation. Most methods for this so-called “Bayesian optimization” only allow sequential exploration of the parameter space. However, it is often desirable to propose batches, or sets of parameter values, to explore simultaneously, especially when large parallel processing facilities are at our disposal. Batch methods require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. In this paper, we propose a new approach for parallelizing Bayesian optimization by modeling the diversity of a batch via determinantal point processes (DPPs) whose kernels are learned automatically. This allows us to generalize a previous result as well as to prove better regret bounds based on DPP sampling. Our experiments on a variety of synthetic and real-world robotics and hyper-parameter optimization tasks indicate that our DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.
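To make the abstract's core idea concrete, here is a minimal NumPy/scikit-learn sketch of one round of DPP-based batch selection. It is an illustration, not the authors' exact algorithm: the first point maximizes a GP-UCB acquisition, and the remaining points are chosen by greedy determinant maximization under a DPP whose kernel is the GP posterior covariance over the candidate points; the paper's preferred variant samples from the DPP instead of maximizing greedily. The function name greedy_dpp_batch and the exploration parameter beta are my own.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def greedy_dpp_batch(gp, X_cand, batch_size, beta=4.0):
    """Pick the first point by GP-UCB, then fill the batch by greedily
    maximizing the DPP determinant over the GP posterior covariance.
    Each greedy step selects the candidate with the largest posterior
    variance given the points already in the batch."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    batch = [int(np.argmax(mu + np.sqrt(beta) * sigma))]  # UCB maximizer
    _, L = gp.predict(X_cand, return_cov=True)  # DPP kernel = posterior cov
    for _ in range(batch_size - 1):
        S = batch
        L_S_inv = np.linalg.inv(L[np.ix_(S, S)] + 1e-10 * np.eye(len(S)))
        # Marginal determinant gain of adding candidate i to set S:
        # L[i, i] - L[i, S] @ inv(L[S, S]) @ L[S, i]
        gains = np.diag(L) - np.einsum("is,st,it->i", L[:, S], L_S_inv, L[:, S])
        gains[S] = -np.inf  # never re-select a point already in the batch
        batch.append(int(np.argmax(gains)))
    return batch


# Toy usage: noisy 1-D objective, one batch of 4 points.
rng = np.random.default_rng(0)
X_obs = rng.uniform(-3.0, 3.0, size=(8, 1))
y_obs = np.sin(X_obs).ravel() + 0.1 * rng.standard_normal(8)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_obs, y_obs)
X_cand = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
print(greedy_dpp_batch(gp, X_cand, batch_size=4))
```

Note the correspondence the sketch relies on: for a DPP with kernel L, the greedy gain of adding item i to a selected set S is exactly the GP posterior variance of i conditioned on S, which is why diverse batches under this DPP coincide with pure-exploration batch choices.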
