Approximate inference - how far have we come?
Amir Globerson · David Sontag · Tommi Jaakkola

Sat Dec 13th 07:30 AM -- 07:00 PM @ Westin: Alpine AB

Graphical models have become a key tool for representing multivariate distributions in many machine learning applications. They have been successfully used in diverse fields such as machine vision, bioinformatics, natural language processing, reinforcement learning, and many others. Approximate inference in such models has attracted a great deal of interest in the learning community, and many algorithms have been introduced in recent years, with a specific emphasis on inference in discrete variable models. These new methods explore new and exciting links between inference, combinatorial optimization, and convex duality. They provide new avenues for designing and understanding message passing algorithms, and can give theoretical guarantees when used for learning graphical models.

The goal of this workshop is to assess the current state of the field and explore new directions. We shall specifically be interested in understanding the following issues:

1. State of the field: What are the existing methods, and how do they relate to each other? Which problems can be solved using existing algorithms, and which cannot?

2. Quality of approximations: What are the theoretical guarantees regarding the output of the approximate inference algorithms (e.g., upper or lower bounds on MAP or marginals, optimality within a given factor, certificates of optimality, etc.)? How do these depend on the complexity of the inference algorithms (i.e., what is the tradeoff between running time and accuracy)?

3. Efficiency issues: What type of convergence guarantees do different message passing algorithms have, and what can be proven about their running time? Do certain methods dominate others in terms of these guarantees? When are message passing algorithms better than other "out-of-the-box" convex optimization tools?

4. Scalability and applicability to real problems: How well do current methods scale to large-scale problems (e.g., in machine vision and bioinformatics)? How hard is inference in typical real-world problems? Although inference is NP-hard in general, this does not imply that a specific real problem cannot be solved exactly. The relative success of approximate inference methods on some real-world problems suggests that we are working in a regime of problems that are amenable to approximation. Can we characterize it?

5. Connections across fields: Approximate inference is closely linked to problems in combinatorial optimization (e.g., maximization of functions over graphs, or counting problems) and in convex optimization (e.g., dual ascent methods). What techniques can we "import" from these fields, and what from our advances in approximate inference can we "export" back?

6. Inference and learning: Learning the parameters of a graphical model often requires inference as a subroutine. How should approximate inference be embedded into learning? Once a model is learned, what inference method should be used, and how should it relate to the one used during learning? What efficient algorithms exist for joint learning and inference?

7. Continuous models: Many new approximate inference approaches have been developed in the context of discrete variable models. How can these be applied to continuous-valued or hybrid models?
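To make the message passing algorithms discussed above concrete, here is a minimal sketch (not material from the workshop itself) of loopy sum-product belief propagation on a pairwise Markov random field. All names and the toy potentials are illustrative assumptions; the update rule is the standard one: a message from node i to neighbor j sums the unary potential, the pairwise potential, and all other incoming messages over the states of i. On a tree-structured model the resulting beliefs are exact marginals; on loopy graphs they are only approximations, which is precisely the regime the workshop questions target.

```python
import itertools

def loopy_bp(n_states, unary, pairwise, iters=50):
    """Sum-product BP on a pairwise MRF.

    unary:    dict node -> list of unary potentials, one per state
    pairwise: dict (i, j) -> matrix psi[x_i][x_j] for each undirected edge
    Returns approximate marginals (beliefs) per node.
    """
    neighbors = {}
    msgs = {}
    for (i, j) in pairwise:
        neighbors.setdefault(i, []).append(j)
        neighbors.setdefault(j, []).append(i)
        msgs[(i, j)] = [1.0] * n_states   # message i -> j, uniform init
        msgs[(j, i)] = [1.0] * n_states

    def pot(i, j, xi, xj):
        # Look up the pairwise potential regardless of edge orientation.
        if (i, j) in pairwise:
            return pairwise[(i, j)][xi][xj]
        return pairwise[(j, i)][xj][xi]

    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            out = []
            for xj in range(n_states):
                s = 0.0
                for xi in range(n_states):
                    p = unary[i][xi] * pot(i, j, xi, xj)
                    for k in neighbors[i]:
                        if k != j:
                            p *= msgs[(k, i)][xi]
                    s += p
                out.append(s)
            z = sum(out)
            new[(i, j)] = [v / z for v in out]  # normalize for stability
        msgs = new

    beliefs = {}
    for i in neighbors:
        b = list(unary[i])
        for k in neighbors[i]:
            for x in range(n_states):
                b[x] *= msgs[(k, i)][x]
        z = sum(b)
        beliefs[i] = [v / z for v in b]
    return beliefs

def exact_marginals(n_nodes, n_states, unary, pairwise):
    """Brute-force marginals by enumerating all assignments (for checking)."""
    margs = {i: [0.0] * n_states for i in range(n_nodes)}
    for assign in itertools.product(range(n_states), repeat=n_nodes):
        w = 1.0
        for i in range(n_nodes):
            w *= unary[i][assign[i]]
        for (i, j), psi in pairwise.items():
            w *= psi[assign[i]][assign[j]]
        for i in range(n_nodes):
            margs[i][assign[i]] += w
    return {i: [v / sum(m) for v in m] for i, m in margs.items()}

# Toy binary chain 0 - 1 - 2: a tree, so BP beliefs equal the exact marginals.
unary = {0: [0.7, 0.3], 1: [0.5, 0.5], 2: [0.2, 0.8]}
psi = [[1.2, 0.6], [0.6, 1.2]]            # attractive pairwise potential
pairwise = {(0, 1): psi, (1, 2): psi}
beliefs = loopy_bp(2, unary, pairwise)
```

On graphs with cycles the same update is run until (hopefully) convergence, and the beliefs are no longer exact — the convergence and accuracy questions in items 2 and 3 above concern exactly this gap.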

Author Information

Amir Globerson (Tel Aviv University, Google)

Amir Globerson is a senior lecturer at the School of Engineering and Computer Science at the Hebrew University. He received a PhD in computational neuroscience from the Hebrew University, and was a Rothschild postdoctoral fellow at MIT. He joined the Hebrew University in 2008. His research interests include graphical models and probabilistic inference, convex optimization, robust learning, and natural language processing.

David Sontag (MIT)
Tommi Jaakkola (MIT)

Tommi Jaakkola is a professor of Electrical Engineering and Computer Science at MIT. He received an M.Sc. degree in theoretical physics from Helsinki University of Technology, and a Ph.D. from MIT in computational neuroscience. Following a Sloan postdoctoral fellowship in computational molecular biology, he joined the MIT faculty in 1998. His research interests include statistical inference, graphical models, and large-scale modern estimation problems with predominantly incomplete data.
