Agreement-Based Learning
Percy Liang · Dan Klein · Michael Jordan

Wed Dec 05 09:50 AM -- 10:00 AM (PST)

The learning of probabilistic models with many hidden variables and non-decomposable dependencies is an important but challenging problem. In contrast to traditional approaches based on approximate inference in a single intractable model, our approach is to train a set of tractable component models by encouraging them to agree on the hidden variables. This allows us to capture non-decomposable aspects of the data while still maintaining tractability. We exhibit an objective function for our approach, derive EM-style algorithms for parameter estimation, and demonstrate their effectiveness on three challenging real-world learning tasks.
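As a rough illustration of the idea (this is not code from the paper): the sketch below trains two tractable component models with an EM-style loop whose E-step posterior over the hidden variable is proportional to the *product* of the component joints, which is the agreement term. Both components here are toy categorical mixtures over synthetic data; all names, the data setup, and the choice of identical component structure are assumptions made for brevity, whereas the paper's point is that the components may differ in structure.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, N = 3, 5, 200  # hidden states, vocabulary size, number of observations

# Hypothetical toy data: draws from a ground-truth mixture the learner never sees.
true_emit = rng.dirichlet(np.ones(V) * 0.3, size=K)
x = np.array([rng.choice(V, p=true_emit[z]) for z in rng.integers(K, size=N)])

def init_model():
    """One tractable component model: a prior over z and an emission over x."""
    return rng.dirichlet(np.ones(K)), rng.dirichlet(np.ones(V), size=K)

(pi1, em1), (pi2, em2) = init_model(), init_model()

def agreement_objective():
    # log sum_z p1(x, z) * p2(x, z), summed over the data points.
    prod = (pi1[:, None] * em1) * (pi2[:, None] * em2)  # shape (K, V)
    return np.log(prod[:, x].sum(axis=0)).sum()

objs = []
for _ in range(50):
    # E-step: the shared posterior over z is proportional to the PRODUCT of
    # the component joints -- this is what encourages the models to agree.
    q = ((pi1[:, None] * em1) * (pi2[:, None] * em2))[:, x]  # shape (K, N)
    q /= q.sum(axis=0, keepdims=True)

    # M-step: each component is re-estimated on its own from the shared
    # expected counts, so every component stays individually tractable.
    for pi, em in ((pi1, em1), (pi2, em2)):
        pi[:] = q.sum(axis=1) / N
        counts = np.zeros((K, V))
        np.add.at(counts.T, x, q.T)  # scatter-add expected counts per symbol
        em[:] = counts / counts.sum(axis=1, keepdims=True)
    objs.append(agreement_objective())
```

Because the M-step is an ordinary maximum-likelihood update for each component under the shared posterior, the loop monotonically improves the product objective while never running inference in a single joint intractable model.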

Author Information

Percy Liang (Stanford University)
Dan Klein (UC Berkeley)
Michael Jordan (UC Berkeley)
