Advances in Variational Inference
David Blei · Shakir Mohamed · Michael Jordan · Charles Blundell · Tamara Broderick · Matthew D. Hoffman

Sat Dec, 05:30 AM -- 03:30 PM PST @ Level 5, Room 510a
Event URL: http://www.variationalinference.org

The ever-increasing size of data sets has resulted in an immense effort in machine learning and statistics to develop more powerful and scalable probabilistic models. Efficient inference remains a challenge and limits the use of these models in large-scale scientific and industrial applications. Traditional unbiased inference schemes such as Markov chain Monte Carlo (MCMC) are often slow to run and difficult to evaluate in finite time. In contrast, variational inference allows for competitive run times and more reliable convergence diagnostics on large-scale and streaming data—while continuing to allow for complex, hierarchical modelling. This workshop aims to bring together researchers and practitioners addressing problems of scalable approximate inference to discuss recent advances in variational inference, and to debate the roadmap towards further improvements and wider adoption of variational methods.

The recent resurgence of interest in variational methods includes new methods for scalability using stochastic gradient methods, extensions to the streaming variational setting, improved local variational methods, inference in non-linear dynamical systems, principled regularisation in deep neural networks, and inference-based decision making in reinforcement learning, amongst others. Variational methods have clearly emerged as a preferred way to make Bayesian inference tractable. Despite this interest, there remain significant trade-offs in speed, accuracy, simplicity, applicability, and learned model complexity between variational inference and other approximate schemes such as MCMC and point estimation. In this workshop, we will discuss how to rigorously characterise these trade-offs, as well as how they might be made more favourable. Moreover, we will address other issues of adoption in scientific communities that could benefit from the use of variational inference, including, but not limited to, the development of relevant software packages.
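To give a concrete flavour of the stochastic-gradient methods mentioned above, the following is a minimal illustrative sketch (not code from the workshop): it fits a Gaussian variational approximation q(z) = N(mu, sigma^2) to a simple target density by ascending Monte Carlo estimates of the ELBO gradient, using the reparameterisation z = mu + sigma * eps. The target (a unit-variance Gaussian with mean 3) and all hyperparameters here are hypothetical choices for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: p(z) = N(TARGET_MEAN, 1).
# Variational family: q(z) = N(mu, sigma^2), parameterised by (mu, log_sigma).
TARGET_MEAN = 3.0

mu, log_sigma = 0.0, 0.0        # initial variational parameters
lr, n_steps, batch = 0.05, 3000, 32

for _ in range(n_steps):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(batch)
    z = mu + sigma * eps                    # reparameterised samples from q
    dlogp = -(z - TARGET_MEAN)              # d/dz log p(z) for a unit-variance Gaussian
    grad_mu = dlogp.mean()                  # pathwise ELBO gradient w.r.t. mu
    # Pathwise gradient w.r.t. log_sigma, plus the entropy term's gradient (+1).
    grad_log_sigma = (dlogp * sigma * eps).mean() + 1.0
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(mu, np.exp(log_sigma))    # approaches the target's mean and standard deviation
```

Because the gradients are unbiased Monte Carlo estimates computed from small batches of samples, the same loop scales to models where the expectations have no closed form, which is the key property exploited by the scalable methods discussed at the workshop.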

Author Information

David Blei (Columbia University)
Shakir Mohamed (DeepMind)

Shakir Mohamed is a senior staff scientist at DeepMind in London. Shakir's main interests lie at the intersection of approximate Bayesian inference, deep learning and reinforcement learning, and the role that machine learning systems at this intersection have in the development of more intelligent and general-purpose learning systems. Before moving to London, Shakir held a Junior Research Fellowship from the Canadian Institute for Advanced Research (CIFAR), based in Vancouver at the University of British Columbia with Nando de Freitas. Shakir completed his PhD with Zoubin Ghahramani at the University of Cambridge, where he was a Commonwealth Scholar to the United Kingdom. Shakir is from South Africa and completed his previous degrees in Electrical and Information Engineering at the University of the Witwatersrand, Johannesburg.

Michael Jordan (UC Berkeley)
Charles Blundell (DeepMind)
Tamara Broderick (MIT)
Matthew D. Hoffman (Google)
