Poster
Local Bayesian optimization via maximizing probability of descent
Quan Nguyen · Kaiwen Wu · Jacob Gardner · Roman Garnett

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #412

Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space. For objective functions whose gradient cannot be evaluated directly, Bayesian optimization offers one solution -- we construct a probabilistic model of the objective, design a policy to learn about the gradient at the current location, and use the resulting information to navigate the objective landscape. Previous work has realized this scheme by minimizing the variance in the estimate of the gradient, then moving in the direction of the expected gradient. In this paper, we re-examine and refine this approach. We demonstrate that, surprisingly, the expected value of the gradient is not always the direction maximizing the probability of descent, and in fact, these directions may be nearly orthogonal. This observation then inspires an elegant optimization scheme seeking to maximize the probability of descent while moving in the direction of most-probable descent. Experiments on both synthetic and real-world objectives show that our method outperforms previous realizations of this optimization scheme and is competitive against other, significantly more complicated baselines.
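To see why the expected gradient can fail to maximize the probability of descent: if the gradient posterior at the current location is Gaussian, g ~ N(μ, Σ), then the directional derivative along a unit vector v is also Gaussian, so the probability that v is a descent direction is Φ(−μᵀv / √(vᵀΣv)). This quantity is maximized at v ∝ −Σ⁻¹μ, which can point far from the negative posterior mean −μ when Σ is ill-conditioned. The following is a minimal NumPy/SciPy sketch of this phenomenon, with μ and Σ chosen as hypothetical values (not taken from the paper) to reproduce the near-orthogonality the abstract mentions:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Gaussian posterior over the gradient at the current
# location: g ~ N(mu, Sigma). Values are illustrative only.
mu = np.array([1.0, 0.1])
Sigma = np.diag([100.0, 0.01])

def prob_descent(v, mu, Sigma):
    """P(g @ v < 0) for g ~ N(mu, Sigma): the directional derivative
    along v is Gaussian with mean mu @ v and variance v @ Sigma @ v."""
    v = v / np.linalg.norm(v)
    return norm.cdf(-(mu @ v) / np.sqrt(v @ Sigma @ v))

v_mean = -mu                         # negative expected gradient
v_mpd = -np.linalg.solve(Sigma, mu)  # maximizer of prob_descent: -inv(Sigma) @ mu

print(prob_descent(v_mean, mu, Sigma))  # ~0.54: barely better than chance
print(prob_descent(v_mpd, mu, Sigma))   # ~0.84: far more likely to descend

cos = (v_mean @ v_mpd) / (np.linalg.norm(v_mean) * np.linalg.norm(v_mpd))
print(np.degrees(np.arccos(cos)))       # ~84 degrees: nearly orthogonal
```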

Author Information

Quan Nguyen (Washington University in St. Louis)

I am a fourth-year Ph.D. student in Computer Science at the McKelvey School of Engineering at Washington University in St. Louis, advised by Prof. Roman Garnett. My research interests are in Bayesian machine learning, active search, and, more broadly, decision-making under uncertainty, with the goal of accelerating and automating scientific discovery.

Kaiwen Wu (University of Pennsylvania)
Jacob Gardner (University of Pennsylvania)
Roman Garnett (Washington University in St. Louis)
