Poster

Bayesian Optimization with Gradients

Jian Wu · Matthias Poloczek · Andrew Wilson · Peter Frazier

Pacific Ballroom #192

Keywords: [ Hyperparameter Selection ] [ Non-Convex Optimization ] [ Gaussian Processes ] [ Bayesian Nonparametrics ]


Abstract:

Bayesian optimization has shown success in global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (dKG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than is possible in the derivative-free setting. dKG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the dKG acquisition function and its gradient using a novel fast discretization-free technique. We show that dKG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.
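The sketch below is a minimal illustration of the core idea the abstract describes, not the authors' implementation: a Gaussian process is conditioned jointly on function values and gradients (which are jointly Gaussian with the function), and a candidate point is scored by a Monte Carlo knowledge-gradient estimate over fantasy (value, gradient) observations. It uses a 1-D squared-exponential kernel and a grid-based KG estimate rather than the paper's discretization-free computation; all names, hyperparameters, and the test objective are hypothetical.

```python
# Illustrative sketch of knowledge-gradient Bayesian optimization with gradient
# observations (assumptions: 1-D inputs, squared-exponential kernel, grid-based
# Monte Carlo KG). Not the paper's dKG implementation.
import numpy as np

LENGTHSCALE, SIGNAL_VAR, NOISE_VAR = 0.3, 1.0, 1e-6

def joint_kernel(x1, x2):
    """Covariance of the stacked vector [f(x1); f'(x1)] with [f(x2); f'(x2)]."""
    d = x1[:, None] - x2[None, :]
    k = SIGNAL_VAR * np.exp(-0.5 * d**2 / LENGTHSCALE**2)
    k_fd = k * d / LENGTHSCALE**2                               # cov(f(x1), f'(x2))
    k_df = -k * d / LENGTHSCALE**2                              # cov(f'(x1), f(x2))
    k_dd = k * (1.0 / LENGTHSCALE**2 - d**2 / LENGTHSCALE**4)   # cov(f'(x1), f'(x2))
    return np.block([[k, k_fd], [k_df, k_dd]])

def posterior_mean(x_test, x_obs, y_obs):
    """Posterior mean of f at x_test; y_obs stacks values then derivatives."""
    K = joint_kernel(x_obs, x_obs) + NOISE_VAR * np.eye(2 * len(x_obs))
    K_star = joint_kernel(x_test, x_obs)[: len(x_test), :]  # cov(f(x_test), obs)
    return K_star @ np.linalg.solve(K, y_obs)

def kg_with_gradients(z, x_obs, y_obs, x_grid, n_fantasies=64, seed=0):
    """Monte Carlo knowledge-gradient of observing (f(z), f'(z)) at candidate z."""
    rng = np.random.default_rng(seed)
    n = len(x_obs)
    best_now = posterior_mean(x_grid, x_obs, y_obs).min()
    # Predictive distribution of the fantasy observation (f(z), f'(z)).
    K = joint_kernel(x_obs, x_obs) + NOISE_VAR * np.eye(2 * n)
    K_zx = joint_kernel(np.array([z]), x_obs)
    K_zz = joint_kernel(np.array([z]), np.array([z]))
    A = np.linalg.solve(K, K_zx.T).T
    mu_z = A @ y_obs
    cov_z = K_zz - A @ K_zx.T + NOISE_VAR * np.eye(2)
    gains = []
    for _ in range(n_fantasies):
        y_z = rng.multivariate_normal(mu_z, cov_z)   # fantasy (value, gradient)
        x_new = np.append(x_obs, z)
        # Re-stack as [values..., fantasy value, derivatives..., fantasy derivative].
        y_new = np.concatenate([y_obs[:n], y_z[:1], y_obs[n:], y_z[1:]])
        gains.append(best_now - posterior_mean(x_grid, x_new, y_new).min())
    return float(np.mean(gains))

# Example: two observations of f(x) = (x - 0.6)**2 and its gradient on [0, 1].
x_obs = np.array([0.2, 0.9])
y_obs = np.concatenate([(x_obs - 0.6) ** 2, 2 * (x_obs - 0.6)])  # values, then gradients
x_grid = np.linspace(0.0, 1.0, 101)
scores = [kg_with_gradients(z, x_obs, y_obs, x_grid) for z in x_grid]
print("next evaluation point:", x_grid[int(np.argmax(scores))])
```

The gradient enters in two places, mirroring the abstract: the joint kernel lets derivative observations tighten the posterior, and each fantasy sample includes a simulated gradient, so the expected improvement in the posterior minimum reflects the extra information gradients provide.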
