Matrix square roots and their inverses arise frequently in machine learning, e.g., when sampling from high-dimensional Gaussians N(0, K) or "whitening" a vector b against a covariance matrix K. While existing methods typically require O(N^3) computation, we introduce a highly efficient quadratic-time algorithm for computing K^{1/2}b, K^{-1/2}b, and their derivatives using only matrix-vector multiplications (MVMs). Our method combines Krylov subspace methods with a rational approximation and typically achieves 4 decimal places of accuracy with fewer than 100 MVMs. Moreover, the backward pass requires little additional computation. We demonstrate our method's applicability on matrices as large as 50,000 × 50,000 (well beyond the reach of traditional methods) with little approximation error. Applying this increased scalability to variational Gaussian processes, Bayesian optimization, and Gibbs sampling results in more powerful models with higher accuracy. In particular, we perform variational GP inference with up to 10,000 inducing points and perform Gibbs sampling on a 25,000-dimensional problem.
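The abstract's core idea is that K^{1/2}b can be approximated from MVMs alone, without ever forming K^{1/2}. The paper's actual algorithm pairs Krylov methods with a rational approximation; as a rough, hedged illustration of the MVM-only principle, the sketch below instead uses plain Lanczos tridiagonalization (a simpler Krylov technique, not the authors' method). The function name `lanczos_sqrt_mvm` and all details are assumptions for illustration only.

```python
import numpy as np

def lanczos_sqrt_mvm(matvec, b, num_iters=50):
    """Approximate K^{1/2} b using only matrix-vector products with K.

    Runs Lanczos to build an orthonormal Krylov basis Q and a small
    tridiagonal T = Q^T K Q, then returns ||b|| * Q * sqrt(T) * e1.
    Illustrative sketch only -- not the paper's rational-approximation method.
    """
    n = b.shape[0]
    m = min(num_iters, n)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)   # diagonal of T
    beta = np.zeros(m)    # off-diagonal of T
    Q[:, 0] = b / np.linalg.norm(b)
    q_prev, beta_prev = np.zeros(n), 0.0
    for j in range(m):
        w = matvec(Q[:, j]) - beta_prev * q_prev      # three-term recurrence
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)      # full reorthogonalization
        beta_j = np.linalg.norm(w)
        if j + 1 < m:
            if beta_j < 1e-12:                        # Krylov space exhausted
                m = j + 1
                break
            Q[:, j + 1] = w / beta_j
            beta[j] = beta_j
        q_prev, beta_prev = Q[:, j], beta_j
    # Apply sqrt to the small tridiagonal matrix via its eigendecomposition.
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    evals = np.clip(evals, 0.0, None)                 # guard tiny negative eigenvalues
    fT_e1 = evecs @ (np.sqrt(evals) * evecs[0, :])    # sqrt(T) @ e1
    return np.linalg.norm(b) * (Q[:, :m] @ fT_e1)
```

Because the routine only ever touches K through `matvec`, its cost is (number of iterations) × (cost of one MVM), which is the property the abstract exploits to reach quadratic time on large dense matrices.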
Author Information
Geoff Pleiss (Columbia University)
Martin Jankowiak (Broad Institute)
David Eriksson (Facebook)
Anil Damle (Cornell University)
Jacob Gardner (University of Pennsylvania)
More from the Same Authors
- 2022 : Efficient Variational Gaussian Processes Initialization via Kernel-based Least Squares Fitting
  Xinran Zhu · David Bindel · Jacob Gardner
- 2022 : Sparse Bayesian Optimization
  Sulin Liu · Qing Feng · David Eriksson · Ben Letham · Eytan Bakshy
- 2022 : Q & A
  Jacob Gardner · Virginia Aglietti · Janardhan Rao Doppa
- 2022 Tutorial: Advances in Bayesian Optimization
  Janardhan Rao Doppa · Virginia Aglietti · Jacob Gardner
- 2022 : Tutorial part 1
  Jacob Gardner · Virginia Aglietti · Janardhan Rao Doppa
- 2022 : Panel Discussion
  Jacob Gardner · Marta Blangiardo · Viacheslav Borovitskiy · Jasper Snoek · Paula Moraga · Carolina Osorio
- 2022 Poster: Local Bayesian optimization via maximizing probability of descent
  Quan Nguyen · Kaiwen Wu · Jacob Gardner · Roman Garnett
- 2022 Poster: Model Preserving Compression for Neural Networks
  Jerry Chee · Megan Flynn (née Renz) · Anil Damle · Christopher De Sa
- 2022 Poster: Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients
  Kyurae Kim · Jisu Oh · Jacob Gardner · Adji Bousso Dieng · Hongseok Kim
- 2022 Poster: Communication-efficient distributed eigenspace estimation with arbitrary node failures
  Vasileios Charisopoulos · Anil Damle
- 2022 Poster: Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization
  Samuel Daulton · Xingchen Wan · David Eriksson · Maximilian Balandat · Michael A Osborne · Eytan Bakshy
- 2022 Poster: Local Latent Space Bayesian Optimization over Structured Inputs
  Natalie Maus · Haydn Jones · Juston Moore · Matt Kusner · John Bradshaw · Jacob Gardner
- 2021 Poster: The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
  Geoff Pleiss · John Cunningham
- 2021 Poster: Rectangular Flows for Manifold Learning
  Anthony Caterini · Gabriel Loaiza-Ganem · Geoff Pleiss · John Cunningham
- 2021 Poster: Scaling Gaussian Processes with Derivative Information Using Variational Inference
  Misha Padidar · Xinran Zhu · Leo Huang · Jacob Gardner · David Bindel
- 2020 Poster: Entrywise convergence of iterative methods for eigenproblems
  Vasileios Charisopoulos · Austin Benson · Anil Damle
- 2020 Poster: Identifying Mislabeled Data using the Area Under the Margin Ranking
  Geoff Pleiss · Tianyi Zhang · Ethan Elenberg · Kilian Weinberger
- 2020 Poster: Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees
  Shali Jiang · Daniel Jiang · Maximilian Balandat · Brian Karrer · Jacob Gardner · Roman Garnett
- 2019 : Lunch break & Poster session
  Breandan Considine · Michael Innes · Du Phan · Dougal Maclaurin · Robin Manhaeve · Alexey Radul · Shashi Gowda · Ekansh Sharma · Eli Sennesh · Maxim Kochurov · Gordon Plotkin · Thomas Wiecki · Navjot Kukreja · Chung-chieh Shan · Matthew Johnson · Dan Belov · Neeraj Pradhan · Wannes Meert · Angelika Kimmig · Luc De Raedt · Brian Patton · Matthew Hoffman · Rif A. Saurous · Daniel Roy · Eli Bingham · Martin Jankowiak · Colin Carroll · Junpeng Lao · Liam Paull · Martin Abadi · Angel Rojas Jimenez · JP Chen
- 2019 Poster: Exact Gaussian Processes on a Million Data Points
  Ke Alexander Wang · Geoff Pleiss · Jacob Gardner · Stephen Tyree · Kilian Weinberger · Andrew Gordon Wilson
- 2019 Poster: Variational Bayesian Optimal Experimental Design
  Adam Foster · Martin Jankowiak · Elias Bingham · Paul Horsfall · Yee Whye Teh · Thomas Rainforth · Noah Goodman
- 2019 Spotlight: Variational Bayesian Optimal Experimental Design
  Adam Foster · Martin Jankowiak · Elias Bingham · Paul Horsfall · Yee Whye Teh · Thomas Rainforth · Noah Goodman
- 2018 Poster: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
  Jacob Gardner · Geoff Pleiss · Kilian Weinberger · David Bindel · Andrew Wilson
- 2018 Spotlight: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
  Jacob Gardner · Geoff Pleiss · Kilian Weinberger · David Bindel · Andrew Wilson
- 2017 Poster: On Fairness and Calibration
  Geoff Pleiss · Manish Raghavan · Felix Wu · Jon Kleinberg · Kilian Weinberger