\textsc{LeonArDBO}: Fast and Prior-Driven Bayesian Optimization without Surrogate Modeling
Efe Mert Karagözlü · Conor Igoe · Barnabas Poczos · Jeff Schneider
Abstract
In many real-world black-box optimization problems, practitioners know that the maximizer lies in a small subset of the search space, yet most common Bayesian Optimization (BO) frameworks provide no way to encode this prior knowledge over the maximizer. Moreover, although the goal of BO is only to find the optimizer, BO surrogate models typically model the distribution of the entire latent function, which can introduce a substantial computational burden. Motivated by these observations, we propose \textsc{LeonArDBO}, a novel approach to BO in which the surrogate modeling step directly updates the distribution of the argmax given each new observation, using a neural network trained to perform such updates. This not only enables custom priors over the optimum, but also yields updates that cost $\mathcal{O}(n)$ time in the number of samples, in contrast to the $\mathcal{O}(n^3)$ time of exact Gaussian Process (GP) posterior updates. We evaluate our method empirically on synthetic functions as well as a real scientific problem where large language models (LLMs) can provide useful priors.
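The abstract describes maintaining and cheaply updating a distribution over the argmax itself rather than a full GP posterior. The following is a purely illustrative toy sketch of that idea, not the paper's method: it keeps a discrete belief over candidate maximizer locations (initialized from a user-supplied prior) and applies a hand-written stand-in for the learned update network, with each update costing a constant amount of work per new observation. All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate argmax locations on a 1-D grid. The prior encodes a
# practitioner's belief that the maximizer lies near x = 0.7.
grid = np.linspace(0.0, 1.0, 201)
prior = np.exp(-0.5 * ((grid - 0.7) / 0.1) ** 2)
belief = prior / prior.sum()

def toy_update(belief, x_obs, y_obs):
    """Stand-in for a learned update network: reweight the belief by a
    simple likelihood that penalizes candidates close to low-valued
    observations. This is NOT the paper's trained network; it only
    illustrates an O(1)-per-sample (O(n) total) belief update, versus
    the O(n^3) cost of refitting an exact GP posterior."""
    likelihood = np.exp(y_obs * np.exp(-((grid - x_obs) ** 2) / 0.02))
    new_belief = belief * likelihood
    return new_belief / new_belief.sum()

def f(x):
    # Unknown black-box objective with true maximizer at x = 0.65.
    return -((x - 0.65) ** 2)

for _ in range(10):
    x = rng.choice(grid, p=belief)   # query drawn from the current belief
    belief = toy_update(belief, x, f(x))

# MAP estimate of the argmax under the current belief.
print(grid[np.argmax(belief)])
```

Because each observation touches the belief vector once, the total cost after $n$ observations is linear in $n$, which is the scaling behavior the abstract contrasts with exact GP updates.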