Thompson Sampling (TS) from Gaussian Process (GP) models is a powerful tool for the optimization of black-box functions. Although TS enjoys strong theoretical guarantees and convincing empirical performance, it incurs a large computational overhead that scales polynomially with the optimization budget. Recently, scalable TS methods based on sparse GP models have been proposed to increase the scope of TS, enabling its application to problems that are sufficiently multi-modal, noisy or combinatorial to require more than a few hundred evaluations to be solved. However, the approximation error introduced by sparse GPs invalidates all existing regret bounds. In this work, we perform a theoretical and empirical analysis of scalable TS. We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS. These conceptual claims are validated for practical implementations of scalable TS on synthetic benchmarks and as part of a real-world high-throughput molecular design task.
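To make the setting concrete, a minimal Thompson-sampling loop with an exact GP posterior (the cubic-cost baseline that the sparse-GP variants studied here accelerate) might look as follows. This is an illustrative sketch, not the paper's implementation: the squared-exponential kernel, its lengthscale, the noise level, and the toy objective are all assumptions chosen for the example.

```python
import numpy as np

def rbf(a, b, lengthscale=0.2):
    # Squared-exponential kernel matrix (assumed kernel; unit amplitude).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def ts_step(x_obs, y_obs, x_cand, noise=1e-3, rng=None):
    """One TS step: draw one sample from the exact GP posterior over the
    candidate set and return the index of its maximiser.

    The Cholesky factorisations below cost O(n^3) in the number of
    observations / candidates -- the overhead that scalable TS avoids.
    """
    rng = np.random.default_rng(rng)
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_cand, x_obs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks @ alpha                      # posterior mean at candidates
    v = np.linalg.solve(L, Ks.T)
    cov = rbf(x_cand, x_cand) - v.T @ v  # posterior covariance
    cov += 1e-6 * np.eye(len(x_cand))    # jitter for numerical stability
    sample = mu + np.linalg.cholesky(cov) @ rng.standard_normal(len(x_cand))
    return int(np.argmax(sample))

# Toy objective (an assumption for the example): maximum at x = 0.6.
f = lambda x: -(x - 0.6) ** 2
x_cand = np.linspace(0.0, 1.0, 201)
x_obs = np.array([0.1, 0.9])
y_obs = f(x_obs)
rng = np.random.default_rng(0)
for _ in range(15):
    i = ts_step(x_obs, y_obs, x_cand, rng=rng)
    x_obs = np.append(x_obs, x_cand[i])
    y_obs = np.append(y_obs, f(x_cand[i]))
```

Scalable TS replaces the exact posterior draw with a sample from a sparse (inducing-point) GP, e.g. via decoupled sampling, so the per-step cost scales with the number of inducing points rather than the full observation count.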
Author Information
Sattar Vakili (MediaTek Research)
Henry Moss (Secondmind)
I am a Senior Machine Learning Researcher at Secondmind (formerly PROWLER.io). I leverage information-theoretic arguments to provide efficient, reliable and scalable Bayesian optimisation for problems inspired by science and the automotive industry.
Artem Artemev (Imperial College London)
Vincent Dutordoir (University of Cambridge)
Victor Picheny (Prowler)
More from the Same Authors
- 2021 Meetup: Cambridge, UK (Vincent Dutordoir)
- 2022: Fantasizing with Dual GPs in Bayesian Optimization and Active Learning (Paul Chang · Prakhar Verma · ST John · Victor Picheny · Henry Moss · Arno Solin)
- 2022: Recommendations for Baselines and Benchmarking Approximate Gaussian Processes (Sebastian Ober · David Burt · Artem Artemev · Mark van der Wilk)
- 2022: GAUCHE: A Library for Gaussian Processes in Chemistry (Ryan-Rhys Griffiths · Leo Klarner · Henry Moss · Aditya Ravuri · Sang Truong · Bojana Rankovic · Yuanqi Du · Arian Jamasb · Julius Schwartz · Austin Tripp · Gregory Kell · Anthony Bourached · Alex Chan · Jacob Moss · Chengzhi Guo · Alpha Lee · Philippe Schwaller · Jian Tang)
- 2022: Gradient Descent: Robustness to Adversarial Corruption (Fu-Chieh Chang · Farhang Nabiei · Pei-Yuan Wu · Alexandru Cioba · Sattar Vakili · Alberto Bernacchia)
- 2023 Poster: Kernelized Reinforcement Learning with Order Optimal Regret Bounds (Sattar Vakili · Iuliia Olkhovskaia)
- 2023 Poster: Geometric Neural Diffusion Processes (Emile Mathieu · Vincent Dutordoir · Michael Hutchinson · Valentin De Bortoli · Yee Whye Teh · Richard Turner)
- 2022: Poster Session 2 (Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi)
- 2022 Poster: Near-Optimal Collaborative Learning in Bandits (Clémence Réda · Sattar Vakili · Emilie Kaufmann)
- 2022 Poster: Memory safe computations with XLA compiler (Artem Artemev · Yuze An · Tilman Roeder · Mark van der Wilk)
- 2021 Poster: A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance (Sudeep Salgia · Sattar Vakili · Qing Zhao)
- 2021 Poster: Optimal Order Simple Regret for Gaussian Process Bandits (Sattar Vakili · Nacime Bouziani · Sepehr Jalali · Alberto Bernacchia · Da-shan Shiu)
- 2021 Poster: Deep Neural Networks as Point Estimates for Deep Gaussian Processes (Vincent Dutordoir · James Hensman · Mark van der Wilk · Carl Henrik Ek · Zoubin Ghahramani · Nicolas Durrande)
- 2018 Poster: Gaussian Process Conditional Density Estimation (Vincent Dutordoir · Hugh Salimbeni · James Hensman · Marc Deisenroth)