Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. Although these models have a long history in statistics, their potential has only become widely appreciated in the machine learning community during the past decade. This tutorial will introduce GPs, their application to regression and classification, and outline recent computational developments. GPs are a natural framework for Bayesian inference about functions: they yield full predictive distributions and support model selection in a principled way. The prior over functions is specified hierarchically, with a covariance function (or kernel) that controls the properties of the functions in an interpretable way. Whereas inference in the simplest regression case can be done in closed form, inference in classification models is intractable, and several approximations have been proposed, e.g. the Expectation Propagation algorithm. A central limitation on the applicability of GPs to problems with large numbers of examples is that naïve implementations scale quadratically in memory and cubically in time with the number of examples, making direct treatment of more than a few thousand cases impractical. Recent work on sparse approximations addresses these issues.
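The closed-form regression case mentioned above can be made concrete with a minimal sketch. The kernel choice (squared exponential), the hyperparameter values, and the helper names `rbf_kernel` and `gp_predict` below are illustrative assumptions, not part of the tutorial itself; the computation follows the standard Cholesky-based GP prediction recipe.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) covariance between two input sets."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_predict(X, y, X_star, noise_var=0.1):
    """Closed-form GP posterior mean and variance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))  # O(n^2) memory
    L = np.linalg.cholesky(K)                          # O(n^3) time: the bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_star = rbf_kernel(X, X_star)
    mean = K_star.T @ alpha
    v = np.linalg.solve(L, K_star)
    cov = rbf_kernel(X_star, X_star) - v.T @ v         # latent-function covariance
    return mean, np.diag(cov)

# Toy 1-D example: noisy observations of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)
X_star = np.linspace(-3, 3, 100)[:, None]
mu, var = gp_predict(X, y, X_star)
```

The Cholesky factorization is the cubic-cost step referred to in the abstract; the sparse approximations it mentions replace the full n×n kernel matrix with a low-rank surrogate to sidestep exactly this bottleneck.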
Author Information
Carl Edward Rasmussen (University of Cambridge)
More from the Same Authors
- 2022: Gaussian Process parameterized Covariance Kernels for Non-stationary Regression »
  Vidhi Lalchand · Talay Cheema · Laurence Aitchison · Carl Edward Rasmussen
- 2022 Poster: Sparse Gaussian Process Hyperparameters: Optimize or Integrate? »
  Vidhi Lalchand · Wessel Bruinsma · David Burt · Carl Edward Rasmussen
- 2021 Poster: Kernel Identification Through Transformers »
  Fergus Simpson · Ian Davies · Vidhi Lalchand · Alessandro Vullo · Nicolas Durrande · Carl Edward Rasmussen
- 2021 Poster: Marginalised Gaussian Processes with Nested Sampling »
  Fergus Simpson · Vidhi Lalchand · Carl Edward Rasmussen
- 2020: Combining variational autoencoder representations with structural descriptors improves prediction of docking scores »
  Miguel Garcia-Ortegon · Carl Edward Rasmussen · Hiroshi Kajino
- 2020 Poster: Ensembling geophysical models with Bayesian Neural Networks »
  Ushnish Sengupta · Matt Amos · Scott Hosking · Carl Edward Rasmussen · Matthew Juniper · Paul Young
- 2017 Poster: Convolutional Gaussian Processes »
  Mark van der Wilk · Carl Edward Rasmussen · James Hensman
- 2017 Oral: Convolutional Gaussian Processes »
  Mark van der Wilk · Carl Edward Rasmussen · James Hensman
- 2017 Poster: Data-Efficient Reinforcement Learning in Continuous State-Action Gaussian-POMDPs »
  Rowan McAllister · Carl Edward Rasmussen
- 2016 Poster: Understanding Probabilistic Sparse Gaussian Process Approximations »
  Matthias Bauer · Mark van der Wilk · Carl Edward Rasmussen
- 2014 Poster: Distributed Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models »
  Yarin Gal · Mark van der Wilk · Carl Edward Rasmussen
- 2014 Poster: Variational Gaussian Process State-Space Models »
  Roger Frigola · Yutian Chen · Carl Edward Rasmussen
- 2013 Poster: Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC »
  Roger Frigola · Fredrik Lindsten · Thomas Schön · Carl Edward Rasmussen
- 2012 Poster: Active Learning of Model Evidence Using Bayesian Quadrature »
  Michael A Osborne · David Duvenaud · Roman Garnett · Carl Edward Rasmussen · Stephen J Roberts · Zoubin Ghahramani
- 2011 Poster: Gaussian Process Training with Input Noise »
  Andrew McHutchon · Carl Edward Rasmussen
- 2011 Poster: Additive Gaussian Processes »
  David Duvenaud · Hannes Nickisch · Carl Edward Rasmussen
- 2009 Workshop: Probabilistic Approaches for Control and Robotics »
  Marc Deisenroth · Hilbert J Kappen · Emo Todorov · Duy Nguyen-Tuong · Carl Edward Rasmussen · Jan Peters