Many probabilistic modeling tasks rely on solving challenging inference problems. These combinatorial problems arise, for example, when predicting likely values for variables (as in selecting and orienting residues in protein design), when parsing in natural language processing, or when learning the model structure itself. In many cases, the inference problems involve densely connected variables (or higher-order dependencies) and are provably hard. However, recent research has shown that some of these difficulties can be overcome by a careful choice of approximation schemes and learning algorithms. These have yielded very encouraging results in a wide array of fields, from machine vision and natural language processing to computational biology and signal processing. In this tutorial, we will focus on linear programming (LP) relaxations, which have been particularly successful in solving inference problems. Intuitively, LP relaxations decompose a complex problem into a set of simpler subproblems that are subsequently encouraged to agree. If the subproblems agree on a solution, then that solution is the optimal one; otherwise the relaxation yields a fractional solution. Geometrically, the relaxation maintains an outer-bound approximation to a polytope whose vertices correspond to valid solutions. We will introduce and explain key ideas behind recent approaches, discuss when they can and cannot be applied, how they can be integrated into supervised learning schemes, and what efficient message-passing algorithms exist for solving them. We will also discuss how similar ideas can be used for calculating marginals. Examples from several applications will be provided, including computational biology, natural language processing (see also a separate tutorial on Dual Decomposition for NLP), and structure learning.
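The decompose-and-agree idea can be made concrete with a tiny sketch (not from the tutorial itself): MAP inference on a toy 3-node binary cycle, where each edge is solved as an independent subproblem and Lagrange multipliers are adjusted by subgradient steps until the subproblems agree on the shared nodes. The model, potentials, and helper names below are all illustrative assumptions.

```python
import itertools

# Toy pairwise MRF on a 3-cycle with binary variables.
# Node and edge scores are illustrative, not from the tutorial.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
theta_n = {0: [0.0, 1.5], 1: [0.0, 0.0], 2: [0.0, 0.0]}
theta_e = {e: {(a, b): (1.0 if a == b else 0.0)
               for a in (0, 1) for b in (0, 1)} for e in edges}
deg = {i: sum(i in e for e in edges) for i in nodes}

def brute_force_map():
    """Exact MAP by enumeration (feasible only for tiny models)."""
    def score(x):
        return (sum(theta_n[i][x[i]] for i in nodes)
                + sum(theta_e[(i, j)][(x[i], x[j])] for (i, j) in edges))
    return max(itertools.product((0, 1), repeat=len(nodes)), key=score)

def dual_decomposition(iters=50, step=0.5):
    """Each edge is an independent subproblem; Lagrange multipliers
    (lam) push the subproblems toward agreement on shared nodes."""
    lam = {(e, i, x): 0.0 for e in edges for i in e for x in (0, 1)}
    for _ in range(iters):
        # Solve each edge subproblem independently.
        local = {}
        for (i, j) in edges:
            e = (i, j)
            local[e] = max(((a, b) for a in (0, 1) for b in (0, 1)),
                           key=lambda s: theta_e[e][s]
                           + theta_n[i][s[0]] / deg[i] + lam[(e, i, s[0])]
                           + theta_n[j][s[1]] / deg[j] + lam[(e, j, s[1])])
        # Subgradient step: move multipliers toward the average vote.
        for i in nodes:
            touching = [e for e in edges if i in e]
            for x in (0, 1):
                votes = {e: float(local[e][e.index(i)] == x) for e in touching}
                avg = sum(votes.values()) / len(touching)
                for e in touching:
                    lam[(e, i, x)] += step * (avg - votes[e])
    # Decode: per-node majority vote over the edge subproblems.
    assign = []
    for i in nodes:
        touching = [e for e in edges if i in e]
        cnt = sum(local[e][e.index(i)] for e in touching)
        assign.append(int(cnt * 2 >= len(touching)))
    return tuple(assign)
```

On this attractive model the subproblems reach agreement after a few updates, so the decoded assignment matches the exact MAP; when agreement is not reached, the decoded solution corresponds to a fractional point of the relaxation and only an upper bound on the MAP value is guaranteed.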
Author Information
Amir Globerson (Tel Aviv University, Google)
Amir Globerson is a senior lecturer at the School of Engineering and Computer Science at the Hebrew University. He received a PhD in computational neuroscience from the Hebrew University, and was a Rothschild postdoctoral fellow at MIT. He joined the Hebrew University in 2008. His research interests include graphical models and probabilistic inference, convex optimization, robust learning and natural language processing.
Tommi Jaakkola (MIT)
Tommi Jaakkola is a professor of Electrical Engineering and Computer Science at MIT. He received an M.Sc. degree in theoretical physics from Helsinki University of Technology, and a Ph.D. from MIT in computational neuroscience. Following a Sloan postdoctoral fellowship in computational molecular biology, he joined the MIT faculty in 1998. His research interests include statistical inference, graphical models, and large-scale modern estimation problems with predominantly incomplete data.
More from the Same Authors

2019 Poster: Solving graph compression via optimal transport »
Vikas Garg · Tommi Jaakkola 
2019 Poster: Generative Models for Graph-Based Protein Design »
John Ingraham · Vikas Garg · Regina Barzilay · Tommi Jaakkola 
2019 Poster: Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder »
Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan 
2019 Poster: Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers »
Guang-He Lee · Yang Yuan · Shiyu Chang · Tommi Jaakkola 
2019 Poster: A Game Theoretic Approach to Class-wise Selective Rationalization »
Shiyu Chang · Yang Zhang · Mo Yu · Tommi Jaakkola 
2018 Poster: Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction »
Roei Herzig · Moshiko Raboh · Gal Chechik · Jonathan Berant · Amir Globerson 
2018 Poster: Towards Robust Interpretability with Self-Explaining Neural Networks »
David Alvarez Melis · Tommi Jaakkola 
2017 Poster: Local Aggregative Games »
Vikas Garg · Tommi Jaakkola 
2017 Poster: Style Transfer from Non-Parallel Text by Cross-Alignment »
Tianxiao Shen · Tao Lei · Regina Barzilay · Tommi Jaakkola 
2017 Spotlight: Style Transfer from Non-parallel Text by Cross-Alignment »
Tianxiao Shen · Tao Lei · Regina Barzilay · Tommi Jaakkola 
2017 Poster: Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network »
Wengong Jin · Connor Coley · Regina Barzilay · Tommi Jaakkola 
2017 Poster: Robust Conditional Probabilities »
Yoav Wald · Amir Globerson 
2016 Poster: Optimal Tagging with Markov Chain Optimization »
Nir Rosenfeld · Amir Globerson 
2016 Poster: Learning Tree Structured Potential Games »
Vikas Garg · Tommi Jaakkola 
2015 Poster: From random walks to distances on unweighted graphs »
Tatsunori Hashimoto · Yi Sun · Tommi Jaakkola 
2015 Poster: Principal Differences Analysis: Interpretable Characterization of Differences between Distributions »
Jonas Mueller · Tommi Jaakkola 
2014 Poster: Controlling privacy in recommender systems »
Yu Xin · Tommi Jaakkola 
2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions »
Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola 
2013 Poster: On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations »
Tamir Hazan · Subhransu Maji · Tommi Jaakkola 
2012 Workshop: Machine Learning Approaches to Mobile Context Awareness »
Katherine Ellis · Gert Lanckriet · Tommi Jaakkola · Lenny Grokop 
2012 Poster: Convergence Rate Analysis of MAP Coordinate Minimization Algorithms »
Ofer Meshi · Tommi Jaakkola · Amir Globerson 
2011 Session: Spotlight Session 3 »
Amir Globerson 
2011 Session: Oral Session 3 »
Amir Globerson 
2010 Spotlight: More data means less inference: A pseudo-max approach to structured learning »
David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson 
2010 Poster: More data means less inference: A pseudo-max approach to structured learning »
David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson 
2009 Workshop: Approximate Learning of Large Scale Graphical Models »
Russ Salakhutdinov · Amir Globerson · David Sontag 
2009 Poster: An LP View of the M-best MAP problem »
Menachem Fromer · Amir Globerson 
2009 Oral: An LP View of the M-Best MAP Problem »
Menachem Fromer · Amir Globerson 
2008 Workshop: Approximate inference - how far have we come? »
Amir Globerson · David Sontag · Tommi Jaakkola 
2008 Poster: Clusters and Coarse Partitions in LP Relaxations »
David Sontag · Amir Globerson · Tommi Jaakkola 
2008 Spotlight: Clusters and Coarse Partitions in LP Relaxations »
David Sontag · Amir Globerson · Tommi Jaakkola 
2007 Poster: Convex Learning with Invariances »
Choon Hui Teo · Amir Globerson · Sam T Roweis · Alexander Smola 
2007 Oral: New Outer Bounds on the Marginal Polytope »
David Sontag · Tommi Jaakkola 
2007 Poster: New Outer Bounds on the Marginal Polytope »
David Sontag · Tommi Jaakkola 
2007 Spotlight: Convex Learning with Invariances »
Choon Hui Teo · Amir Globerson · Sam T Roweis · Alexander Smola 
2007 Poster: Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations »
Amir Globerson · Tommi Jaakkola 
2006 Talk: Approximate inference using planar graph decomposition »
Amir Globerson · Tommi Jaakkola 
2006 Poster: Approximate inference using planar graph decomposition »
Amir Globerson · Tommi Jaakkola 
2006 Poster: Game Theoretic Algorithms for Protein-DNA binding »
Luis Perez-Breva · Luis E Ortiz · Chen-Hsiang Yeang · Tommi Jaakkola 
2006 Spotlight: Game Theoretic Algorithms for Protein-DNA binding »
Luis Perez-Breva · Luis E Ortiz · Chen-Hsiang Yeang · Tommi Jaakkola 
2006 Poster: Parameter Expanded Variational Bayesian Methods »
Yuan (Alan) Qi · Tommi Jaakkola