
Advances in Approximate Bayesian Inference
Dustin Tran · Tamara Broderick · Stephan Mandt · James McInerney · Shakir Mohamed · Alp Kucukelbir · Matthew D. Hoffman · Neil Lawrence · David Blei

Fri Dec 11 05:30 AM -- 03:30 PM (PST) @ 513 ab
Event URL: http://approximateinference.org

The ever-increasing size of data sets has resulted in an immense effort in Bayesian statistics to develop more expressive and scalable probabilistic models. Inference remains a challenge and limits the use of these models in large-scale scientific and industrial applications. Asymptotically exact schemes such as Markov chain Monte Carlo (MCMC) are often slow to run and difficult to evaluate in finite time. Thus we must resort to approximate inference, which allows for more efficient run times and more reliable convergence diagnostics on large-scale and streaming data—without compromising on the complexity of these models. This workshop aims to bring together researchers and practitioners in order to discuss recent advances in approximate inference; we also aim to discuss the methodological and foundational issues in such techniques in order to consider future improvements.

The resurgence of interest in approximate inference has furthered development in many techniques: for example, scalability, variance reduction, and preserving dependency in variational inference; divide-and-conquer techniques in expectation propagation; dimensionality reduction using random projections; and stochastic variants of Laplace-approximation-based methods. Approximate inference techniques have clearly emerged as the preferred way to perform tractable Bayesian inference. Despite this interest, there remain significant trade-offs in speed, accuracy, generalizability, and learned model complexity. In this workshop, we will discuss how to rigorously characterize these trade-offs, as well as how they might be made more favourable. Moreover, we will address issues surrounding the adoption of approximate inference in scientific communities, which could benefit from guidance on practical usage and from the development of relevant software packages.
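To make the variational techniques mentioned above concrete, here is a minimal, illustrative sketch (our own toy example, not material from the workshop) of stochastic variational inference with the reparameterization trick. It fits a Gaussian approximation q(θ) = N(m, s²) to the posterior of a conjugate normal-mean model, where the exact posterior is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: prior theta ~ N(0, 1); likelihood x_i ~ N(theta, 1).
x = rng.normal(2.0, 1.0, size=50)
n = len(x)

# Exact conjugate posterior, for reference: N(sum(x)/(n+1), 1/(n+1)).
post_mean = x.sum() / (n + 1)

# Variational family q(theta) = N(m, s^2); maximize the ELBO over (m, log s)
# by stochastic gradient ascent using the reparameterization trick.
m, log_s = 0.0, -2.0
lr = 0.005
m_trace, s_trace = [], []
for step in range(4000):
    s = np.exp(log_s)
    eps = rng.normal()
    theta = m + s * eps                     # reparameterized draw from q
    dlogp = x.sum() - (n + 1) * theta       # d/dtheta [log p(x|theta) + log p(theta)]
    m += lr * dlogp                         # stochastic estimate of dELBO/dm
    log_s += lr * (dlogp * s * eps + 1.0)   # dELBO/dlog s; the +1 is q's entropy term
    m_trace.append(m)
    s_trace.append(np.exp(log_s))

# Average late iterates to smooth out stochastic-gradient noise.
m_avg = float(np.mean(m_trace[-1000:]))
s_avg = float(np.mean(s_trace[-1000:]))
print(f"q mean {m_avg:.3f} vs exact posterior mean {post_mean:.3f}")
```

Because this model is conjugate, the fitted q(θ) should recover the exact posterior N(sum(x)/(n+1), 1/(n+1)); the same gradient-based recipe, however, applies unchanged to non-conjugate models, which is where the scalability and variance-reduction questions raised above become central.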

Author Information

Dustin Tran (Columbia University)
Tamara Broderick (MIT)
Stephan Mandt (Columbia University)
James McInerney (Columbia)
Shakir Mohamed (Google DeepMind)

Shakir Mohamed is a senior staff scientist at DeepMind in London. Shakir's main interests lie at the intersection of approximate Bayesian inference, deep learning and reinforcement learning, and the role that machine learning systems at this intersection have in the development of more intelligent and general-purpose learning systems. Before moving to London, Shakir held a Junior Research Fellowship from the Canadian Institute for Advanced Research (CIFAR), based in Vancouver at the University of British Columbia with Nando de Freitas. Shakir completed his PhD with Zoubin Ghahramani at the University of Cambridge, where he was a Commonwealth Scholar to the United Kingdom. Shakir is from South Africa and completed his previous degrees in Electrical and Information Engineering at the University of the Witwatersrand, Johannesburg.

Alp Kucukelbir (Fero Labs / Columbia University)
Matthew D. Hoffman (Adobe Research)
Neil Lawrence (University of Cambridge)
David Blei (Columbia University)

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
