Poster
Operator Variational Inference
Rajesh Ranganath · Dustin Tran · Jaan Altosaar · David Blei

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #122

Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (OPVI), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling---allowing inference to scale to massive data---as well as objectives that admit variational programs---a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of OPVI on a mixture model and a generative model of images.
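To make the starting point of the abstract concrete, here is an illustrative sketch of classical KL-based variational inference — the baseline the paper generalizes, not the OPVI algorithm itself. All names, the toy conjugate Gaussian model, and the hyperparameters are assumptions chosen so the exact posterior is available for comparison.

```python
import numpy as np

# Illustrative sketch (not the paper's OPVI algorithm): classical
# KL-based variational inference on a toy conjugate model where the
# exact posterior is known.
rng = np.random.default_rng(0)

# Model: mu ~ N(0, 1); x_i | mu ~ N(mu, 1).
# Exact posterior: N(sum(x) / (n + 1), 1 / (n + 1)).
x = rng.normal(2.0, 1.0, size=50)
n = len(x)
post_mean = x.sum() / (n + 1)
post_var = 1.0 / (n + 1)

# Variational family q(mu) = N(m, exp(log_s)^2). Maximize a Monte
# Carlo estimate of the ELBO (equivalently, minimize KL(q || p)) by
# stochastic gradient ascent, using the reparameterization
# mu = m + exp(log_s) * eps with eps ~ N(0, 1).
m, log_s = 0.0, 0.0
lr, num_steps, num_samples = 0.01, 3000, 10

for _ in range(num_steps):
    s = np.exp(log_s)
    eps = rng.normal(size=num_samples)
    mu = m + s * eps
    # d/dmu log p(x, mu) = -mu + sum_i (x_i - mu)
    g = -mu + (x.sum() - n * mu)
    # Chain rule through the reparameterization; the Gaussian entropy
    # term contributes d/dlog_s [log_s + const] = 1.
    grad_m = g.mean()
    grad_log_s = (g * eps).mean() * s + 1.0
    m += lr * grad_m
    log_s += lr * grad_log_s
```

Because the model is conjugate, `m` approaches the exact posterior mean and `exp(log_s)` the exact posterior standard deviation. The abstract's proposal is to replace the KL-derived objective with operator-defined objectives while keeping this black-box, gradient-based structure.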

Author Information

Rajesh Ranganath (Princeton University)

Rajesh Ranganath is a PhD candidate in computer science at Princeton University. His research interests include approximate inference, model checking, Bayesian nonparametrics, and machine learning for healthcare. Rajesh has made several advances in variational methods, especially in popularizing black-box variational inference methods, which automate inference and make it easier to use while providing more scalable and accurate posterior approximations. Rajesh works in the SLAP group with David Blei. Before starting his PhD, Rajesh worked as a software engineer for AMA Capital Management. He obtained his BS and MS from Stanford University with Andrew Ng and Dan Jurafsky. Rajesh has won several awards and fellowships, including the NDSEG graduate fellowship and the Porter Ogden Jacobus Fellowship, given to the top four doctoral students at Princeton University.

Dustin Tran (Columbia University)
Jaan Altosaar (Princeton University)
David Blei (Columbia University)

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
