Graphical models have become a key tool in representing multivariate distributions in many machine learning applications. They have been successfully used in diverse fields such as machine vision, bioinformatics, natural language processing, reinforcement learning and many others. Approximate inference in such models has attracted a great deal of interest in the learning community, and many algorithms have been introduced in recent years, with a specific emphasis on inference in discrete variable models. These new methods explore new and exciting links between inference, combinatorial optimization, and convex duality. They provide new avenues for designing and understanding message passing algorithms, and can give theoretical guarantees when used for learning graphical models.

The goal of this workshop is to assess the current state of the field and explore new directions. We shall specifically be interested in understanding the following issues:

1. State of the field: What are the existing methods, and how do they relate to each other? Which problems can be solved using existing algorithms, and which cannot?

2. Quality of approximations: What are the theoretical guarantees regarding the output of the approximate inference algorithms (e.g., upper or lower bounds on MAP or marginals, optimality within a given factor, certificates of optimality, etc.)? How do these depend on the complexity of the inference algorithms (i.e., what is the tradeoff between running time and accuracy)?

3. Efficiency issues: What type of convergence guarantees do different message passing algorithms have, and what can be proven about their running time? Do certain methods dominate others in terms of these guarantees? When are message passing algorithms better than other "out-of-the-box" convex optimization tools?

4. Scalability and applicability to real problems: How well do current methods scale to large-scale problems (e.g., in machine vision and bioinformatics)? How hard is inference in typical real-world problems? Although inference is generally NP-hard, this does not imply that a specific real problem cannot be solved exactly. The relative success of approximate inference methods on some real-world problems suggests that we are working in a regime of problems that are amenable to approximation. Can we characterize it?

5. Connections across fields: Approximate inference is closely linked to problems in combinatorial optimization (e.g., maximization of functions over graphs, or counting problems) and in convex optimization (e.g., dual ascent methods). What techniques can we "import" from these fields, and what from our advances in approximate inference can we "export" back?

6. Inference and learning: Learning the parameters of a graphical model often requires inference as a subroutine. How should approximate inference be embedded into learning? Once a model is learned, what inference method should be used, and how should it relate to the one used during learning? What efficient algorithms exist for joint learning and inference?

7. Continuous models: Many new approximate inference approaches have been developed in the context of discrete variable models. How can these be applied to continuous-valued or hybrid models?
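Several of the topics above concern MAP inference and message passing. As a small illustration of the kind of algorithm under discussion, the sketch below computes the exact MAP assignment of a toy chain MRF by a forward max-product pass with backtracking, and checks the result against brute-force enumeration. The potentials are invented for illustration and are not taken from any particular workshop paper.

```python
import itertools

# Unary log-potentials theta[i][x_i] and pairwise log-potentials
# phi[i][x_i][x_{i+1}] for a chain of n = 3 binary variables.
# Values are arbitrary; both edges favor agreement between neighbors.
theta = [[0.0, 1.2], [0.5, 0.0], [0.3, 0.8]]
phi = [[[1.0, 0.0], [0.0, 1.0]],
       [[1.0, 0.0], [0.0, 1.0]]]

def map_chain(theta, phi):
    """Exact MAP on a chain via max-product dynamic programming."""
    n = len(theta)
    m = [list(theta[0])]          # m[i][x] = best score of prefix 0..i with x_i = x
    back = []                     # backpointers for recovering the argmax
    for i in range(1, n):
        scores, ptrs = [], []
        for x in range(2):
            cand = [m[i - 1][xp] + phi[i - 1][xp][x] + theta[i][x]
                    for xp in range(2)]
            best = max(range(2), key=lambda xp: cand[xp])
            scores.append(cand[best])
            ptrs.append(best)
        m.append(scores)
        back.append(ptrs)
    # Backtrack from the best final state.
    x = max(range(2), key=lambda v: m[-1][v])
    assignment = [x]
    for i in range(n - 1, 0, -1):
        x = back[i - 1][x]
        assignment.append(x)
    assignment.reverse()
    return assignment, max(m[-1])

def score(xs):
    """Total log-potential of a full assignment, for brute-force checking."""
    return sum(theta[i][xs[i]] for i in range(3)) + \
           sum(phi[i][xs[i]][xs[i + 1]] for i in range(2))

a, v = map_chain(theta, phi)
brute = max(itertools.product([0, 1], repeat=3), key=score)
assert list(brute) == a and abs(score(a) - v) < 1e-9
print(a)  # → [1, 1, 1]
```

On trees this dynamic program is exact; the workshop questions above concern what happens on loopy graphs, where message passing schemes of this flavor become approximate and their guarantees are the object of study.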
Author Information
Amir Globerson (Tel Aviv University, Google)
Amir Globerson is a senior lecturer at the School of Engineering and Computer Science at the Hebrew University. He received a PhD in computational neuroscience from the Hebrew University, and was a Rothschild postdoctoral fellow at MIT. He joined the Hebrew University in 2008. His research interests include graphical models and probabilistic inference, convex optimization, robust learning and natural language processing.
David Sontag (MIT)
Tommi Jaakkola (MIT)
Tommi Jaakkola is a professor of Electrical Engineering and Computer Science at MIT. He received an M.Sc. degree in theoretical physics from Helsinki University of Technology, and Ph.D. from MIT in computational neuroscience. Following a Sloan postdoctoral fellowship in computational molecular biology, he joined the MIT faculty in 1998. His research interests include statistical inference, graphical models, and large scale modern estimation problems with predominantly incomplete data.
More from the Same Authors

2019 Poster: Solving graph compression via optimal transport »
Vikas Garg · Tommi Jaakkola 
2019 Poster: Generative Models for Graph-Based Protein Design »
John Ingraham · Vikas Garg · Regina Barzilay · Tommi Jaakkola 
2019 Poster: Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder »
Guy Lorberbom · Andreea Gane · Tommi Jaakkola · Tamir Hazan 
2019 Poster: Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers »
Guang-He Lee · Yang Yuan · Shiyu Chang · Tommi Jaakkola 
2019 Poster: A Game Theoretic Approach to Class-wise Selective Rationalization »
Shiyu Chang · Yang Zhang · Mo Yu · Tommi Jaakkola 
2018 Poster: Why Is My Classifier Discriminatory? »
Irene Chen · Fredrik Johansson · David Sontag 
2018 Poster: Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction »
Roei Herzig · Moshiko Raboh · Gal Chechik · Jonathan Berant · Amir Globerson 
2018 Spotlight: Why Is My Classifier Discriminatory? »
Irene Chen · Fredrik Johansson · David Sontag 
2018 Poster: Towards Robust Interpretability with Self-Explaining Neural Networks »
David Alvarez Melis · Tommi Jaakkola 
2017 Poster: Local Aggregative Games »
Vikas Garg · Tommi Jaakkola 
2017 Poster: Causal Effect Inference with Deep Latent-Variable Models »
Christos Louizos · Uri Shalit · Joris M Mooij · David Sontag · Richard Zemel · Max Welling 
2017 Poster: Style Transfer from Non-Parallel Text by Cross-Alignment »
Tianxiao Shen · Tao Lei · Regina Barzilay · Tommi Jaakkola 
2017 Spotlight: Style Transfer from Non-Parallel Text by Cross-Alignment »
Tianxiao Shen · Tao Lei · Regina Barzilay · Tommi Jaakkola 
2017 Poster: Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network »
Wengong Jin · Connor Coley · Regina Barzilay · Tommi Jaakkola 
2017 Poster: Robust Conditional Probabilities »
Yoav Wald · Amir Globerson 
2016 Poster: Optimal Tagging with Markov Chain Optimization »
Nir Rosenfeld · Amir Globerson 
2016 Poster: Learning Tree Structured Potential Games »
Vikas Garg · Tommi Jaakkola 
2015 Workshop: Machine Learning For Healthcare (MLHC) »
Theofanis Karaletsos · Rajesh Ranganath · Suchi Saria · David Sontag 
2015 Poster: From random walks to distances on unweighted graphs »
Tatsunori Hashimoto · Yi Sun · Tommi Jaakkola 
2015 Poster: Barrier Frank-Wolfe for Marginal Inference »
Rahul G Krishnan · Simon Lacoste-Julien · David Sontag 
2015 Poster: Principal Differences Analysis: Interpretable Characterization of Differences between Distributions »
Jonas Mueller · Tommi Jaakkola 
2014 Poster: Controlling privacy in recommender systems »
Yu Xin · Tommi Jaakkola 
2013 Poster: Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions »
Tamir Hazan · Subhransu Maji · Joseph Keshet · Tommi Jaakkola 
2013 Poster: Discovering Hidden Variables in Noisy-Or Networks using Quartet Tests »
Yacine Jernite · Yoni Halpern · David Sontag 
2013 Poster: On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations »
Tamir Hazan · Subhransu Maji · Tommi Jaakkola 
2012 Workshop: Machine Learning Approaches to Mobile Context Awareness »
Katherine Ellis · Gert Lanckriet · Tommi Jaakkola · Lenny Grokop 
2012 Poster: Convergence Rate Analysis of MAP Coordinate Minimization Algorithms »
Ofer Meshi · Tommi Jaakkola · Amir Globerson 
2011 Poster: Complexity of Inference in Latent Dirichlet Allocation »
David Sontag · Daniel Roy 
2011 Spotlight: Complexity of Inference in Latent Dirichlet Allocation »
David Sontag · Daniel Roy 
2011 Session: Spotlight Session 3 »
Amir Globerson 
2011 Session: Oral Session 3 »
Amir Globerson 
2011 Tutorial: Linear Programming Relaxations for Graphical Models »
Amir Globerson · Tommi Jaakkola 
2010 Spotlight: More data means less inference: A pseudo-max approach to structured learning »
David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson 
2010 Poster: More data means less inference: A pseudo-max approach to structured learning »
David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson 
2009 Workshop: Approximate Learning of Large Scale Graphical Models »
Russ Salakhutdinov · Amir Globerson · David Sontag 
2009 Poster: An LP View of the M-best MAP problem »
Menachem Fromer · Amir Globerson 
2009 Oral: An LP View of the M-Best MAP Problem »
Menachem Fromer · Amir Globerson 
2008 Poster: Clusters and Coarse Partitions in LP Relaxations »
David Sontag · Amir Globerson · Tommi Jaakkola 
2008 Spotlight: Clusters and Coarse Partitions in LP Relaxations »
David Sontag · Amir Globerson · Tommi Jaakkola 
2007 Poster: Convex Learning with Invariances »
Choon Hui Teo · Amir Globerson · Sam T Roweis · Alexander Smola 
2007 Oral: New Outer Bounds on the Marginal Polytope »
David Sontag · Tommi Jaakkola 
2007 Poster: New Outer Bounds on the Marginal Polytope »
David Sontag · Tommi Jaakkola 
2007 Spotlight: Convex Learning with Invariances »
Choon Hui Teo · Amir Globerson · Sam T Roweis · Alexander Smola 
2007 Poster: Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations »
Amir Globerson · Tommi Jaakkola 
2006 Talk: Approximate inference using planar graph decomposition »
Amir Globerson · Tommi Jaakkola 
2006 Poster: Approximate inference using planar graph decomposition »
Amir Globerson · Tommi Jaakkola 
2006 Poster: Game Theoretic Algorithms for Protein-DNA binding »
Luis Perez-Breva · Luis E Ortiz · Chen-Hsiang Yeang · Tommi Jaakkola 
2006 Spotlight: Game Theoretic Algorithms for Protein-DNA binding »
Luis Perez-Breva · Luis E Ortiz · Chen-Hsiang Yeang · Tommi Jaakkola 
2006 Poster: Parameter Expanded Variational Bayesian Methods »
Yuan (Alan) Qi · Tommi Jaakkola