The expectation maximization (EM) algorithm is a widely used maximum-likelihood estimation procedure for statistical models in which the values of some variables are hidden. Very often, however, our primary aim is to find a model that assigns the latent variables values with an intended meaning for our data, and maximizing expected likelihood only sometimes accomplishes this. Unfortunately, it is often indirect or difficult to incorporate even simple a priori information about latent variables without making models overly complex or intractable. In this paper, we present an efficient, principled way to inject constraints on the posteriors of latent variables into the EM algorithm. Our method can be viewed as a regularization of the posteriors of hidden variables, or alternatively as a restriction on the types of lower bounds used for maximizing data likelihood. Focusing on the alignment problem for statistical machine translation, we show that simple, intuitive posterior constraints can greatly improve performance over standard baselines and be competitive with more complex, intractable models.
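To make the idea of constraining E-step posteriors concrete, here is a minimal sketch (not the authors' implementation) of EM on a toy two-component Gaussian mixture, where the E-step posteriors are KL-projected onto a hypothetical constraint set that caps the average posterior mass assigned to one component. The model, the `max_frac` constraint, and the bisection-based dual solver are all assumptions chosen for brevity.

```python
# Sketch of EM with a constrained E-step: after computing the usual posteriors,
# project them onto a (hypothetical) constraint set before the M-step.
import numpy as np

def e_step(x, means, stds, weights):
    """Unconstrained posteriors p(z=k | x_i) for a 2-component Gaussian mixture."""
    lik = np.stack([
        weights[k] * np.exp(-0.5 * ((x - means[k]) / stds[k]) ** 2) / stds[k]
        for k in range(2)
    ], axis=1)
    return lik / lik.sum(axis=1, keepdims=True)

def project_posteriors(post, max_frac, tol=1e-6):
    """KL-project posteriors onto {q : mean_i q_i(z=1) <= max_frac}.

    For a single expectation constraint the projection has the form
    q_i(z) proportional to p_i(z) * exp(-lam * 1[z == 1]) with lam >= 0,
    so the dual variable lam can be found by simple bisection.
    """
    def reweight(lam):
        scaled = post * np.array([1.0, np.exp(-lam)])
        scaled /= scaled.sum(axis=1, keepdims=True)
        return scaled, scaled[:, 1].mean()

    q, frac = reweight(0.0)
    if frac <= max_frac:              # constraint already satisfied
        return q
    lo, hi = 0.0, 1.0
    while reweight(hi)[1] > max_frac: # grow the bracket until feasible
        hi *= 2.0
    while hi - lo > tol:              # bisection on the dual variable
        mid = 0.5 * (lo + hi)
        if reweight(mid)[1] > max_frac:
            lo = mid
        else:
            hi = mid
    return reweight(hi)[0]

def m_step(x, q):
    """Standard M-step: re-estimate weights, means, and stds from posteriors."""
    nk = q.sum(axis=0)
    weights = nk / len(x)
    means = (q * x[:, None]).sum(axis=0) / nk
    stds = np.sqrt((q * (x[:, None] - means) ** 2).sum(axis=0) / nk) + 1e-6
    return means, stds, weights

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])
means, stds, weights = np.array([-2.0, 3.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for _ in range(50):
    q = project_posteriors(e_step(x, means, stds, weights), max_frac=0.6)
    means, stds, weights = m_step(x, q)

print("means:", means, "mixing weights:", weights)
```

In the paper itself the constraints target word-alignment posteriors in statistical machine translation models; the single scalar cap above is only an illustration of the general mechanism of projecting posteriors before the M-step.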
Author Information
Kuzman Ganchev (University of Pennsylvania)
Joao V Graca (L2F INESC-ID Lisboa)
Ben Taskar (University of Washington)
Related Events (a corresponding poster, oral, or spotlight)
- 2007 Poster: Expectation Maximization, Posterior Constraints, and Statistical Alignment
  Wed. Dec 5th 06:30 -- 06:40 PM Room
More from the Same Authors
- 2014 Poster: Expectation-Maximization for Learning Determinantal Point Processes
  Jennifer A Gillenwater · Alex Kulesza · Emily Fox · Ben Taskar
- 2013 Poster: Learning Adaptive Value of Information for Structured Prediction
  David J Weiss · Ben Taskar
- 2013 Poster: Approximate Inference in Continuous Determinantal Processes
  Raja Hafiz Affandi · Emily Fox · Ben Taskar
- 2013 Spotlight: Approximate Inference in Continuous Determinantal Processes
  Raja Hafiz Affandi · Emily Fox · Ben Taskar
- 2012 Poster: Near-Optimal MAP Inference for Determinantal Point Processes
  Alex Kulesza · Jennifer A Gillenwater · Ben Taskar
- 2012 Oral: Near-Optimal MAP Inference for Determinantal Point Processes
  Alex Kulesza · Jennifer A Gillenwater · Ben Taskar
- 2010 Workshop: Coarse-to-Fine Learning and Inference
  Ben Taskar · David J Weiss · Benjamin J Sapp · Slav Petrov
- 2010 Spotlight: Structured Determinantal Point Processes
  Alex Kulesza · Ben Taskar
- 2010 Poster: Structured Determinantal Point Processes
  Alex Kulesza · Ben Taskar
- 2010 Oral: Semi-Supervised Learning with Adversarially Missing Label Information
  Umar Syed · Ben Taskar
- 2010 Session: Spotlights Session 3
  Ben Taskar
- 2010 Session: Oral Session 3
  Ben Taskar
- 2010 Poster: Semi-Supervised Learning with Adversarially Missing Label Information
  Umar Syed · Ben Taskar
- 2010 Poster: Sidestepping Intractable Inference with Structured Ensemble Cascades
  David J Weiss · Benjamin J Sapp · Ben Taskar
- 2009 Poster: Posterior vs Parameter Sparsity in Latent Variable Models
  Joao V Graca · Kuzman Ganchev · Ben Taskar · Fernando Pereira
- 2009 Spotlight: Posterior vs Parameter Sparsity in Latent Variable Models
  Joao V Graca · Kuzman Ganchev · Ben Taskar · Fernando Pereira
- 2009 Session: Oral Session 6: Theory, Optimization and Games
  Ben Taskar
- 2007 Tutorial: Structured Prediction
  Ben Taskar