We propose a sparse method for scalable automated variational inference (AVI) in a large class of models with Gaussian process (GP) priors, multiple latent functions, multiple outputs, and non-linear likelihoods. Our approach retains the statistical efficiency of the original AVI method, requiring only expectations over univariate Gaussian distributions to approximate the posterior with a mixture of Gaussians. Experiments on small datasets across a range of problems, including regression, classification, log-Gaussian Cox processes, and warped GPs, show that our method can perform as well as the full method even under high levels of sparsity. In larger experiments on the MNIST and SARCOS datasets, we show that our method can outperform previously published scalable approaches that were handcrafted for specific likelihood models.
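The abstract's central computational claim is that the expected log-likelihood terms of the variational objective reduce to one-dimensional Gaussian expectations, which is what allows the method to treat the (non-linear) likelihood as a black box. The sketch below illustrates one standard way such a univariate expectation can be computed, via Gauss-Hermite quadrature; it is not the authors' implementation (the paper may use Monte Carlo estimates instead), and the function name `gauss_hermite_expectation` is illustrative.

```python
import numpy as np

def gauss_hermite_expectation(log_lik, mean, var, n_points=20):
    """Approximate E_{N(f | mean, var)}[log_lik(f)] by Gauss-Hermite quadrature.

    Only a pointwise evaluation of log_lik is needed, so any non-linear
    likelihood can be plugged in without model-specific derivations.
    """
    # Nodes and weights for the weight function exp(-x^2).
    x, w = np.polynomial.hermite.hermgauss(n_points)
    # Change of variables: f = mean + sqrt(2 * var) * x maps to N(mean, var).
    f = mean + np.sqrt(2.0 * var) * x
    return np.sum(w * log_lik(f)) / np.sqrt(np.pi)

# Example: expected Bernoulli log-likelihood with a logistic link, for y = 1.
log_sigmoid = lambda f: -np.log1p(np.exp(-f))
elbo_term = gauss_hermite_expectation(log_sigmoid, mean=0.5, var=2.0)
```

With a mixture-of-Gaussians posterior, a term like this would be evaluated once per mixture component and per data point, and the results weighted by the mixture proportions.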
Author Information
Amir Dezfouli (The University of New South Wales)
Edwin Bonilla (The University of New South Wales)
More from the Same Authors
- 2023 Poster: The contextual lasso: Sparse linear models via deep neural networks
  Ryan Thompson · Amir Dezfouli · Robert Kohn
- 2021 Poster: TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning
  Minchao Wu · Michael Norrish · Christian Walder · Amir Dezfouli
- 2019 Poster: Disentangled behavioural representations
  Amir Dezfouli · Hassan Ashtiani · Omar Ghattas · Richard Nock · Peter Dayan · Cheng Soon Ong
- 2018 Poster: Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models
  Amir Dezfouli · Richard Morris · Fabio Ramos · Peter Dayan · Bernard Balleine
- 2018 Oral: Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models
  Amir Dezfouli · Richard Morris · Fabio Ramos · Peter Dayan · Bernard Balleine