Variational Entropy Search is Just 1D Regression
Michael Pearce · Thomas Pollak · Luke Hudlass-Galley
Abstract
Variational Entropy Search (VES) is a recently proposed class of acquisition functions for Bayesian optimization (BO) that unifies Expected Improvement and Max-Value Entropy Search (MES). In BO, a Gaussian process is fit to a set of observed data $\mathcal{D}$. In MES, at a given input $x\in\mathbb{R}^d$, one samples scalar output values $y\sim\mathbb{P}[y|x, \mathcal{D}]$, then samples functions that fit both $\mathcal{D}$ and $(x, y)$ and finds the peaks $y^*$ of these functions. These $y^*$ samples are conditioned on $y$ and come from a non-trivial distribution. Given $x$, the MES goal is to estimate how much the potential $y$ value may reduce the entropy of this distribution. The VES goal is instead to learn a variational approximation $q(y^*|y)$ that yields a lower bound on MES. In this work, for a given point $x$, we reinterpret VES as a simple $1$-dimensional _frequentist_ regression problem from $y$ to $y^*$. By framing prior work in this perspective, we explore possible improvements, including generalizing VES to noisy objectives. We evaluate a variety of simple $1$D regression models in BO benchmarks on synthetic data, highlighting significant open questions for future research.
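The sampling procedure the abstract describes can be illustrated with a minimal sketch (not the authors' implementation): for a fixed input $x$, draw $(y, y^*)$ pairs from a toy Gaussian process posterior, then fit a simple frequentist 1D regression from $y$ to $y^*$ as a stand-in for $q(y^*|y)$. The RBF kernel, toy objective, grid size, and choice of ordinary least squares are all illustrative assumptions.

```python
# Sketch of the VES-as-1D-regression view on a 1D toy problem.
# Assumptions (not from the paper): RBF kernel, sin objective,
# a 100-point grid as the search domain, and OLS for q(y*|y).
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between two 1D point sets."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Observed data D from a toy objective, and a candidate input x.
X = np.array([0.1, 0.4, 0.75, 0.9])
Y = np.sin(3 * X)
x = np.array([0.6])
grid = np.linspace(0, 1, 100)
jitter = 1e-6

pairs = []
for _ in range(200):
    # 1) Sample y ~ P[y | x, D] from the GP posterior at x.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    kx = rbf(X, x)
    mu = kx.T @ Kinv @ Y
    var = rbf(x, x) + jitter - kx.T @ Kinv @ kx
    y = rng.normal(mu[0], np.sqrt(var[0, 0]))

    # 2) Sample a function consistent with D and (x, y); its peak is y*.
    Xa, Ya = np.concatenate([X, x]), np.concatenate([Y, [y]])
    Ka = rbf(Xa, Xa) + jitter * np.eye(len(Xa))
    Kainv = np.linalg.inv(Ka)
    kg = rbf(Xa, grid)
    m = kg.T @ Kainv @ Ya
    C = rbf(grid, grid) + jitter * np.eye(len(grid)) - kg.T @ Kainv @ kg
    f = rng.multivariate_normal(m, C)
    pairs.append((y, f.max()))

# 3) Frequentist 1D regression from y to y*; here a linear-Gaussian model
# whose mean is fit by ordinary least squares.
ys, ystars = np.array(pairs).T
slope, intercept = np.polyfit(ys, ystars, 1)
print(f"q(y*|y) mean: {slope:.2f} * y + {intercept:.2f}")
```

A larger sampled $y$ tends to pull the conditioned function's maximum up with it, so the fitted slope is positive; richer regression models for $q(y^*|y)$ slot into step 3 unchanged.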