
Exact Gaussian Processes on a Million Data Points
Ke Alexander Wang · Geoff Pleiss · Jacob Gardner · Stephen Tyree · Kilian Weinberger · Andrew Gordon Wilson

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #169
Gaussian processes (GPs) are flexible non-parametric models, with a capacity that grows with the available data. However, computational constraints with standard inference procedures have limited exact GPs to problems with fewer than about ten thousand training points, necessitating approximations for larger datasets. In this paper, we develop a scalable approach for exact GPs that leverages multi-GPU parallelization and methods like linear conjugate gradients, accessing the kernel matrix only through matrix multiplication. By partitioning and distributing kernel matrix multiplies, we demonstrate that an exact GP can be trained on over a million points in less than 2 hours, a task previously thought to be impossible with current computing hardware. Moreover, our approach is generally applicable, without constraints to grid data or specific kernel classes. Enabled by this scalability, we perform the first-ever comparison of exact GPs against scalable GP approximations on datasets with $10^4 \!-\! 10^6$ data points, showing dramatic performance improvements.
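The core idea described in the abstract, that inference needs the kernel matrix only through matrix-vector products, which can in turn be computed block by block, can be sketched with plain NumPy. This is an illustrative toy, not the authors' GPyTorch implementation: the RBF kernel, the partition count, and all variable names below are assumptions for the example, and the row-block loop stands in for distributing blocks across GPUs.

```python
import numpy as np

def partitioned_matvec(X, lengthscale, v, n_partitions=4):
    # Compute K @ v without materializing the full n x n kernel matrix:
    # build one row-block of an RBF kernel at a time (a stand-in for
    # computing each block on a separate GPU and gathering the results).
    n = X.shape[0]
    out = np.empty(n)
    for rows in np.array_split(np.arange(n), n_partitions):
        sq_dists = ((X[rows, None, :] - X[None, :, :]) ** 2).sum(-1)
        out[rows] = np.exp(-0.5 * sq_dists / lengthscale**2) @ v
    return out

def conjugate_gradients(matvec, y, tol=1e-10, max_iters=1000):
    # Solve K x = y for a symmetric positive-definite K, touching K
    # only through the supplied matvec closure.
    x = np.zeros_like(y)
    r = y - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Kp = matvec(p)
        alpha = rs / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = rng.normal(size=500)
noise = 0.1  # observation noise; keeps the system well conditioned
matvec = lambda v: partitioned_matvec(X, 1.0, v) + noise * v  # (K + sigma^2 I) v
solution = conjugate_gradients(matvec, y)
residual = np.linalg.norm(matvec(solution) - y)
```

The same structure carries over to the paper's setting: replace the row-block loop with kernel blocks computed on different GPUs, and use the resulting solves for the GP marginal likelihood and predictive mean.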

Author Information

Ke Alexander Wang (Cornell University)
Geoff Pleiss (Cornell University)
Jacob Gardner (Uber AI Labs)
Stephen Tyree (NVIDIA)
Kilian Weinberger (Cornell University / ASAPP Research)
Andrew Gordon Wilson (New York University)
