Workshop

Approximate inference - how far have we come?

Amir Globerson · David Sontag · Tommi Jaakkola

Westin: Alpine AB

Sat 13 Dec, 7:30 a.m. PST

Graphical models have become a key tool for representing multivariate distributions in many machine learning applications. They have been successfully used in diverse fields such as machine vision, bioinformatics, natural language processing, reinforcement learning, and many others.

Approximate inference in such models has attracted a great deal of interest in the learning community, and many algorithms have been introduced in recent years, with a specific emphasis on inference in discrete variable models. These new methods explore new and exciting links between inference, combinatorial optimization, and convex duality. They provide new avenues for designing and understanding message passing algorithms, and can give theoretical guarantees when used for learning graphical models.

The goal of this workshop is to assess the current state of the field and explore new directions. We shall specifically be interested in understanding the following issues:

1. State of the field: What are the existing methods, and how do they relate to each other? Which problems can be solved using existing algorithms, and which cannot?

2. Quality of approximations: What theoretical guarantees hold for the output of approximate inference algorithms (e.g., upper or lower bounds on MAP or marginals, optimality within a given factor, certificates of optimality, etc.)? How do these depend on the complexity of the inference algorithms (i.e., what is the tradeoff between running time and accuracy)?

3. Efficiency issues: What type of convergence guarantees do different message passing algorithms have, and what can be proven about their running time? Do certain methods dominate others in terms of these guarantees? When are message passing algorithms better than other "out-of-the-box" convex optimization tools?

4. Scalability and applicability to real problems: How well do current methods scale to large-scale problems (e.g., in machine vision and bioinformatics)? How hard is inference in typical real-world problems? Although inference is NP-hard in general, this does not imply that a specific real problem cannot be solved exactly. The relative success of approximate inference methods on some real-world problems suggests that we are working in a regime of problems that are amenable to approximation. Can we characterize it?

5. Connections across fields: Approximate inference is closely linked to problems in combinatorial optimization (e.g., maximization of functions over graphs, or counting problems) and in convex optimization (e.g., dual ascent methods). What techniques can we "import" from these fields, and what from our advances in approximate inference can we "export" back?

6. Inference and learning: Learning the parameters of a graphical model often requires inference as a subroutine. How should approximate inference be embedded into learning? Once a model is learned, what inference method should be used, and how should it relate to the one used during learning? What efficient algorithms exist for joint learning and inference?

7. Continuous models: Many new approximate inference approaches have been developed in the context of discrete variable models. How can these be applied to continuous-valued or hybrid models?
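To make the message-passing setting concrete, here is a minimal sketch of loopy sum-product belief propagation on a pairwise discrete MRF, one of the algorithm families the workshop discusses. This is an illustrative toy, not code from the workshop; the model, function names, and fixed-iteration schedule are assumptions made for the example.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, n_iters=50):
    """Approximate node marginals of a pairwise MRF via sum-product loopy BP.

    unary:    dict node -> (k,) array of unary potentials
    pairwise: dict edge (i, j) -> (k, k) array indexed [x_i, x_j]
    edges:    list of undirected (i, j) pairs
    """
    # messages[(i, j)] is the message from i to j, initialized uniform
    msgs = {}
    for (i, j) in edges:
        msgs[(i, j)] = np.ones(len(unary[j])) / len(unary[j])
        msgs[(j, i)] = np.ones(len(unary[i])) / len(unary[i])

    nbrs = {v: [] for v in unary}
    for (i, j) in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    for _ in range(n_iters):
        new_msgs = {}
        for (i, j) in msgs:
            # orient the pairwise table so rows index x_i, columns x_j
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            # product of i's unary potential and incoming messages, excluding j
            belief_i = unary[i].copy()
            for n in nbrs[i]:
                if n != j:
                    belief_i = belief_i * msgs[(n, i)]
            m = psi.T @ belief_i            # sum over the states of x_i
            new_msgs[(i, j)] = m / m.sum()  # normalize for numerical stability
        msgs = new_msgs

    # node beliefs: unary potential times all incoming messages, normalized
    marginals = {}
    for v in unary:
        b = unary[v].copy()
        for n in nbrs[v]:
            b = b * msgs[(n, v)]
        marginals[v] = b / b.sum()
    return marginals

# Toy 3-node binary cycle (loopy, so the beliefs are only approximate).
unary = {0: np.array([1.0, 2.0]), 1: np.array([1.0, 1.0]), 2: np.array([2.0, 1.0])}
coupling = np.array([[2.0, 1.0], [1.0, 2.0]])  # favors agreement
edges = [(0, 1), (1, 2), (0, 2)]
print(loopy_bp(unary, {e: coupling for e in edges}, edges))
```

On tree-structured graphs these updates converge to the exact marginals; on the loopy toy cycle above they only approximate them, which is precisely the gap probed by the questions on approximation quality and convergence above.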
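The link between inference and convex optimization raised in topic 5 can likewise be illustrated with the standard local-polytope LP relaxation of MAP inference. The sketch below is an assumption-laden toy using a hypothetical two-node binary model and scipy's generic LP solver; since the graph is a tree, the relaxation is tight and recovers an integral MAP assignment.

```python
import numpy as np
from scipy.optimize import linprog

# Two binary nodes, one edge. Variable layout:
# mu = [mu0(0), mu0(1), mu1(0), mu1(1),
#       mu01(0,0), mu01(0,1), mu01(1,0), mu01(1,1)]
theta_node = np.array([0.0, 1.0, 0.5, 0.0])   # node potentials (toy values)
theta_edge = np.array([1.0, 0.0, 0.0, 1.0])   # rewards agreement
c = -np.concatenate([theta_node, theta_edge])  # linprog minimizes, so negate

A_eq, b_eq = [], []
# normalization: each node's marginal sums to 1
A_eq.append([1, 1, 0, 0, 0, 0, 0, 0]); b_eq.append(1)
A_eq.append([0, 0, 1, 1, 0, 0, 0, 0]); b_eq.append(1)
# consistency: sum_{x1} mu01(x0, x1) = mu0(x0)
A_eq.append([-1, 0, 0, 0, 1, 1, 0, 0]); b_eq.append(0)
A_eq.append([0, -1, 0, 0, 0, 0, 1, 1]); b_eq.append(0)
# consistency: sum_{x0} mu01(x0, x1) = mu1(x1)
A_eq.append([0, 0, -1, 0, 1, 0, 1, 0]); b_eq.append(0)
A_eq.append([0, 0, 0, -1, 0, 1, 0, 1]); b_eq.append(0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print(res.x)  # integral optimum here: both nodes take state 1
```

On loopy graphs the same construction generally yields fractional vertices, and the question of when and how to tighten the relaxation connects directly to the combinatorial optimization issues listed above.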
