One of the core problems of modern statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in probabilistic modeling, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this tutorial we review and discuss variational inference (VI), a method that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling. Brought into machine learning in the 1990s, this class of methods has seen renewed interest and application thanks to recent advances and easier implementation. This tutorial aims to provide both an introduction to VI with a modern view of the field, and an overview of the role that probabilistic inference plays in many of the central areas of machine learning.
The tutorial has three parts. First, we provide a broad review of variational inference from several perspectives. This part serves as an introduction (or review) of its central concepts. Second, we develop and connect some of the pivotal tools for VI that have been developed in the last few years, such as Monte Carlo gradient estimation, black box variational inference, stochastic approximation, and variational autoencoders. These methods have led to a resurgence of research on, and applications of, VI. Finally, we discuss some of the unsolved problems in VI and point to promising research directions.
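To make the optimization view concrete, the following minimal sketch fits a Gaussian variational approximation q(z) = N(mu, sigma^2) to an illustrative target density by stochastic gradient ascent on the ELBO, using Monte Carlo reparameterization gradients (z = mu + sigma * eps, eps ~ N(0, 1)). The target here is an assumed toy example, a Gaussian N(3, 2^2), chosen so the answer is known; it is not taken from the tutorial itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) target: unnormalized log-density of N(3, 2^2).
# Only the gradient of log p(z) with respect to z is needed below.
m_true, s_true = 3.0, 2.0

def dlogp_dz(z):
    return -(z - m_true) / s_true**2

# Variational parameters of q(z) = N(mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 64

for step in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)   # reparameterization noise
    z = mu + sigma * eps                   # samples from q
    g = dlogp_dz(z)
    # Monte Carlo reparameterization gradients of the ELBO:
    # ELBO = E_q[log p(z)] + H[q]; the Gaussian entropy term
    # contributes +1 to the log_sigma gradient.
    grad_mu = g.mean()
    grad_log_sigma = (g * eps * sigma).mean() + 1.0
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(mu, np.exp(log_sigma))  # approaches the target's (3.0, 2.0)
```

Because the target is itself Gaussian, the optimal variational parameters recover it exactly; for non-conjugate targets the same stochastic-gradient recipe applies, which is the essence of the black-box and reparameterization methods covered in the second part of the tutorial.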
Learning objectives:
- Gain a well-grounded understanding of modern advances in variational inference.
- Understand how to implement basic versions for a wide class of models.
- Understand the connections to, and the different names used in, other related research areas.
- Understand important open problems in variational inference research.
Target audience:
- Machine learning researchers at all levels of experience, from first-year grad students to more experienced researchers
- Targeted at those who want to understand recent advances in variational inference
- A basic understanding of probability is sufficient
Author Information
David Blei (Columbia University)
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
Shakir Mohamed (DeepMind)
Shakir Mohamed is a senior staff scientist at DeepMind in London. Shakir's main interests lie at the intersection of approximate Bayesian inference, deep learning and reinforcement learning, and the role that machine learning systems at this intersection have in the development of more intelligent and general-purpose learning systems. Before moving to London, Shakir held a Junior Research Fellowship from the Canadian Institute for Advanced Research (CIFAR), based in Vancouver at the University of British Columbia with Nando de Freitas. Shakir completed his PhD with Zoubin Ghahramani at the University of Cambridge, where he was a Commonwealth Scholar to the United Kingdom. Shakir is from South Africa and completed his previous degrees in Electrical and Information Engineering at the University of the Witwatersrand, Johannesburg.
Rajesh Ranganath (Princeton University)
Rajesh Ranganath is a PhD candidate in computer science at Princeton University. His research interests include approximate inference, model checking, Bayesian nonparametrics, and machine learning for healthcare. Rajesh has made several advances in variational methods, especially in popularising black-box variational inference methods that automate the process of inference, making variational inference easier to use while providing more scalable and accurate posterior approximations. Rajesh works in the SLAP group with David Blei. Before starting his PhD, Rajesh worked as a software engineer for AMA Capital Management. He obtained his BS and MS from Stanford University with Andrew Ng and Dan Jurafsky. Rajesh has won several awards and fellowships, including the NDSEG graduate fellowship and the Porter Ogden Jacobus Fellowship, given to the top four doctoral students at Princeton University.
More from the Same Authors

2020 Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning »
Jessica Forde · Francisco Ruiz · Melanie Fernandez Pradier · Aaron Schein · Finale Doshi-Velez · Isabel Valera · David Blei · Hanna Wallach 
2019 Poster: Training Language GANs from Scratch »
Cyprien de Masson d'Autume · Shakir Mohamed · Mihaela Rosca · Jack Rae 
2019 Poster: Poisson-Randomized Gamma Dynamical Systems »
Aaron Schein · Scott Linderman · Mingyuan Zhou · David Blei · Hanna Wallach 
2019 Poster: Variational Bayes under Model Misspecification »
Yixin Wang · David Blei 
2019 Poster: Using Embeddings to Correct for Unobserved Confounding in Networks »
Victor Veitch · Yixin Wang · David Blei 
2019 Poster: Adapting Neural Networks for the Estimation of Treatment Effects »
Claudia Shi · David Blei · Victor Veitch 
2018 Poster: Implicit Reparameterization Gradients »
Mikhail Figurnov · Shakir Mohamed · Andriy Mnih 
2018 Spotlight: Implicit Reparameterization Gradients »
Mikhail Figurnov · Shakir Mohamed · Andriy Mnih 
2017 Workshop: Advances in Approximate Bayesian Inference »
Francisco Ruiz · Stephan Mandt · Cheng Zhang · James McInerney · Dustin Tran · David Blei · Max Welling · Tamara Broderick · Michalis Titsias 
2017 Workshop: Machine Learning for Health (ML4H): What Parts of Healthcare are Ripe for Disruption by Machine Learning Right Now? »
Jason Fries · Alex Wiltschko · Andrew Beam · Isaac S Kohane · Jasper Snoek · Peter Schulam · Madalina Fiterau · David Kale · Rajesh Ranganath · Bruno Jedynak · Michael Hughes · Tristan Naumann · Natalia Antropova · Adrian Dalca · SHUBHI ASTHANA · Prateek Tandon · Jaz Kandola · Uri Shalit · Marzyeh Ghassemi · Tim Althoff · Alexander Ratner · Jumana Dakka 
2017 Poster: Hierarchical Implicit Models and Likelihood-Free Variational Inference »
Dustin Tran · Rajesh Ranganath · David Blei 
2017 Poster: Structured Embedding Models for Grouped Data »
Maja Rudolph · Francisco Ruiz · Susan Athey · David Blei 
2017 Poster: Variational Inference via $\chi$ Upper Bound Minimization »
Adji Bousso Dieng · Dustin Tran · Rajesh Ranganath · John Paisley · David Blei 
2017 Poster: Context Selection for Embedding Models »
Liping Liu · Francisco Ruiz · Susan Athey · David Blei 
2016 Workshop: Machine Learning for Health »
Uri Shalit · Marzyeh Ghassemi · Jason Fries · Rajesh Ranganath · Theofanis Karaletsos · David Kale · Peter Schulam · Madalina Fiterau 
2016 Workshop: Advances in Approximate Bayesian Inference »
Tamara Broderick · Stephan Mandt · James McInerney · Dustin Tran · David Blei · Kevin Murphy · Andrew Gelman · Michael I Jordan 
2016 Poster: Unsupervised Learning of 3D Structure from Images »
Danilo Jimenez Rezende · S. M. Ali Eslami · Shakir Mohamed · Peter Battaglia · Max Jaderberg · Nicolas Heess 
2016 Poster: Operator Variational Inference »
Rajesh Ranganath · Dustin Tran · Jaan Altosaar · David Blei 
2016 Poster: The Generalized Reparameterization Gradient »
Francisco Ruiz · Michalis Titsias · David Blei 
2016 Poster: Exponential Family Embeddings »
Maja Rudolph · Francisco Ruiz · Stephan Mandt · David Blei 
2015 Workshop: Advances in Approximate Bayesian Inference »
Dustin Tran · Tamara Broderick · Stephan Mandt · James McInerney · Shakir Mohamed · Alp Kucukelbir · Matthew D. Hoffman · Neil Lawrence · David Blei 
2015 Workshop: Machine Learning For Healthcare (MLHC) »
Theofanis Karaletsos · Rajesh Ranganath · Suchi Saria · David Sontag 
2015 Poster: The Population Posterior and Bayesian Modeling on Streams »
James McInerney · Rajesh Ranganath · David Blei 
2015 Poster: Automatic Variational Inference in Stan »
Alp Kucukelbir · Rajesh Ranganath · Andrew Gelman · David Blei 
2015 Spotlight: Automatic Variational Inference in Stan »
Alp Kucukelbir · Rajesh Ranganath · Andrew Gelman · David Blei 
2015 Poster: Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning »
Shakir Mohamed · Danilo Jimenez Rezende 
2015 Poster: Copula variational inference »
Dustin Tran · David Blei · Edo M Airoldi 
2014 Workshop: Advances in Variational Inference »
David Blei · Shakir Mohamed · Michael Jordan · Charles Blundell · Tamara Broderick · Matthew D. Hoffman 
2014 Poster: Semi-supervised Learning with Deep Generative Models »
Durk Kingma · Shakir Mohamed · Danilo Jimenez Rezende · Max Welling 
2014 Spotlight: Semi-supervised Learning with Deep Generative Models »
Durk Kingma · Shakir Mohamed · Danilo Jimenez Rezende · Max Welling 
2012 Workshop: Bayesian Optimization and Decision Making »
Javad Azimi · Roman Garnett · Frank R Hutter · Shakir Mohamed 
2012 Poster: Expectation Propagation in Gaussian Process Dynamical Systems »
Marc Deisenroth · Shakir Mohamed 
2012 Poster: Fast Bayesian Inference for Non-Conjugate Gaussian Process Regression »
Mohammad Emtiyaz Khan · Shakir Mohamed · Kevin P Murphy 
2009 Poster: Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process »
Shakir Mohamed · David A Knowles · Zoubin Ghahramani · Finale P Doshi-Velez 
2008 Poster: Bayesian Exponential Family PCA »
Shakir Mohamed · Katherine Heller · Zoubin Ghahramani 
2008 Spotlight: Bayesian Exponential Family PCA »
Shakir Mohamed · Katherine Heller · Zoubin Ghahramani