Sparse variational Gaussian process (SVGP) methods are a common choice for non-conjugate Gaussian process inference because of their computational benefits. In this paper, we improve their computational efficiency by using a dual parameterization where each data example is assigned dual parameters, similar to the site parameters used in expectation propagation. Our dual parameterization speeds up inference using natural gradient descent and provides a tighter evidence lower bound for hyperparameter learning. The approach has the same memory cost as current SVGP methods, but it is faster and more accurate.
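To make the dual parameterization concrete: each data point i carries a pair of dual (site) parameters (alpha_i, beta_i), and the variational posterior over the inducing variables is assembled from them through the kernel. Below is a minimal NumPy sketch of this assembly, assuming sites that act on the GP through the interpolation weights A = K_xz K_zz^{-1}; the function name and the dense linear algebra are illustrative, not the authors' implementation.

```python
import numpy as np

def q_u_from_dual(Kzz, Kxz, alpha, beta, jitter=1e-6):
    """Assemble q(u) = N(m, S) over inducing variables from
    per-datapoint dual (site) parameters alpha, beta.

    Kzz: (m, m) kernel on inducing inputs; Kxz: (n, m) cross-kernel;
    alpha, beta: (n,) dual parameters. Sketch only: a real
    implementation would use Cholesky factors, not explicit inverses.
    """
    Kzz = Kzz + jitter * np.eye(Kzz.shape[0])       # jitter for stability
    A = np.linalg.solve(Kzz, Kxz.T).T               # A = Kxz Kzz^{-1}, shape (n, m)
    # S^{-1} = Kzz^{-1} + sum_i beta_i a_i a_i^T, with a_i the rows of A
    S_inv = np.linalg.inv(Kzz) + (A * beta[:, None]).T @ A
    S = np.linalg.inv(S_inv)
    m = S @ (A.T @ alpha)                           # m = S sum_i alpha_i a_i
    return m, S
```

As a sanity check, for a conjugate Gaussian likelihood with noise variance sigma^2, setting alpha_i = y_i / sigma^2 and beta_i = 1 / sigma^2 recovers the closed-form optimal q(u) of sparse GP regression; in the non-conjugate case the dual parameters play the role that site parameters play in expectation propagation.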
Author Information
Vincent ADAM (UCL)
PhD in computational neuroscience and machine learning at the Gatsby Unit
Paul Chang (Aalto University)
A machine learning researcher in Arno Solin's group at Aalto University, working on probabilistic modelling, specifically Gaussian processes and methods to speed up inference.
Mohammad Emtiyaz Khan (RIKEN)
Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. He is also a visiting professor at the Tokyo University of Agriculture and Technology (TUAT). Previously, he was a postdoc and then a scientist at École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He finished his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti's research is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as derive new ones, to solve real-world problems.
Arno Solin (Aalto University)
More from the Same Authors
- 2021 : Beyond Target Networks: Improving Deep $Q$-learning with Functional Regularization
  Alexandre Piche · Joseph Marino · Gian Maria Marconi · Valentin Thomas · Chris Pal · Mohammad Emtiyaz Khan
- 2022 : Fantasizing with Dual GPs in Bayesian Optimization and Active Learning
  Paul Chang · Prakhar Verma · ST John · Victor Picheny · Henry Moss · Arno Solin
- 2022 : Towards Improved Learning in Gaussian Processes: The Best of Two Worlds
  Rui Li · ST John · Arno Solin
- 2022 : Can Calibration Improve Sample Prioritization?
  Ganesh Tata · Gautham Krishna Gudur · Gopinath Chennupati · Mohammad Emtiyaz Khan
- 2022 : Practical Structured Riemannian Optimization with Momentum by using Generalized Normal Coordinates
  Wu Lin · Valentin Duruisseaux · Melvin Leok · Frank Nielsen · Mohammad Emtiyaz Khan · Mark Schmidt
- 2022 : Invited Keynote 2
  Mohammad Emtiyaz Khan
- 2021 : Sparse Gaussian Processes for Stochastic Differential Equations
  Prakhar Verma · Vincent ADAM · Arno Solin
- 2021 Poster: Periodic Activation Functions Induce Stationarity
  Lassi Meronen · Martin Trapp · Arno Solin
- 2021 Poster: Knowledge-Adaptation Priors
  Mohammad Emtiyaz Khan · Siddharth Swaroop
- 2021 Poster: Spatio-Temporal Variational Gaussian Processes
  Oliver Hamelijnck · William Wilkinson · Niki Loppi · Arno Solin · Theodoros Damoulas
- 2021 Poster: Scalable Inference in SDEs by Direct Matching of the Fokker–Planck–Kolmogorov Equation
  Arno Solin · Ella Tamir · Prakhar Verma
- 2020 Poster: Stationary Activations for Uncertainty Calibration in Deep Learning
  Lassi Meronen · Christabella Irwanto · Arno Solin
- 2020 Poster: Deep Automodulators
  Ari Heljakka · Yuxin Hou · Juho Kannala · Arno Solin
- 2019 Poster: Approximate Inference Turns Deep Networks into Gaussian Processes
  Mohammad Emtiyaz Khan · Alexander Immer · Ehsan Abedi · Maciej Korzepa
- 2019 Poster: Practical Deep Learning with Bayesian Principles
  Kazuki Osawa · Siddharth Swaroop · Mohammad Emtiyaz Khan · Anirudh Jain · Runa Eschenhagen · Richard Turner · Rio Yokota
- 2019 Tutorial: Deep Learning with Bayesian Principles
  Mohammad Emtiyaz Khan
- 2018 Poster: Infinite-Horizon Gaussian Processes
  Arno Solin · James Hensman · Richard Turner
- 2017 : Poster Session
  Shunsuke Horii · Heejin Jeong · Tobias Schwedes · Qing He · Ben Calderhead · Ertunc Erdil · Jaan Altosaar · Patrick Muchmore · Rajiv Khanna · Ian Gemp · Pengfei Zhang · Yuan Zhou · Chris Cremer · Maria DeYoreo · Alexander Terenin · Brendan McVeigh · Rachit Singh · Yaodong Yang · Erik Bodin · Trefor Evans · Henry Chai · Shandian Zhe · Jeffrey Ling · Vincent ADAM · Lars Maaløe · Andrew Miller · Ari Pakman · Josip Djolonga · Hong Ge
- 2015 Poster: Kullback-Leibler Proximal Variational Inference
  Mohammad Emtiyaz Khan · Pierre Baque · François Fleuret · Pascal Fua
- 2014 Poster: Decoupled Variational Gaussian Inference
  Mohammad Emtiyaz Khan
- 2012 Poster: Fast Bayesian Inference for Non-Conjugate Gaussian Process Regression
  Mohammad Emtiyaz Khan · Shakir Mohamed · Kevin Murphy
- 2010 Poster: Variational bounds for mixed-data factor analysis
  Mohammad Emtiyaz Khan · Benjamin Marlin · Guillaume Bouchard · Kevin Murphy
- 2009 Oral: Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
  Baback Moghaddam · Benjamin Marlin · Mohammad Emtiyaz Khan · Kevin Murphy
- 2009 Poster: Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
  Baback Moghaddam · Benjamin Marlin · Mohammad Emtiyaz Khan · Kevin Murphy