Poster
Learning Gaussian Graphical Models with Observed or Latent FVSs
Ying Liu · Alan S Willsky
Sun Dec 08 02:00 PM -- 06:00 PM (PST) @ Harrah's Special Events Center, 2nd Floor
Gaussian Graphical Models (GGMs), or Gauss-Markov random fields, are widely used in many applications, and the trade-off between modeling capacity and the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles. Exact inference, such as computing the marginal distributions and the partition function, has complexity $O(k^{2}n)$ using message-passing algorithms, where k is the size of the FVS and n is the total number of nodes. We propose efficient structure learning algorithms for two cases: 1) All nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree. Regardless of the maximum degree, and without knowing the full graph structure, we can exactly compute the maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known, or in polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes are latent variables, in which case structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. By incorporating efficient inference into the learning steps, we obtain a learning algorithm using alternating low-rank corrections with complexity $O(kn^{2}+n^{2}\log n)$ per iteration. We also perform experiments using both synthetic data and real flight-delay data to demonstrate the modeling capacity with FVSs of various sizes. We show empirically that the family of GGMs with FVSs of size $O(\log n)$ strikes a good balance between modeling capacity and efficiency.
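The inference idea behind the $O(k^{2}n)$ complexity can be illustrated with a small sketch: partition the information matrix $J$ into the tree part and the FVS part, eliminate the $k$ feedback nodes via a $k \times k$ Schur complement, and note that every solve against the tree-structured block could be done in $O(n)$ by belief propagation. The following toy example (our own illustration, not the paper's code; a dense solve stands in for tree BP) uses a 4-cycle, whose FVS is a single node:

```python
import numpy as np

# Toy GGM on a 4-cycle: nodes 0-1-2-3-0. Removing node 3 (the FVS, k=1)
# leaves the chain 0-1-2, which is a tree.
J = np.array([[ 2.0, -0.5,  0.0, -0.4],
              [-0.5,  2.0, -0.5,  0.0],
              [ 0.0, -0.5,  2.0, -0.6],
              [-0.4,  0.0, -0.6,  2.0]])
h = np.array([1.0, 0.0, 0.5, -1.0])

T, F = [0, 1, 2], [3]            # tree nodes, feedback (FVS) nodes
J_T  = J[np.ix_(T, T)]           # tree-structured block (a chain here)
J_TF = J[np.ix_(T, F)]
J_F  = J[np.ix_(F, F)]

# Each of the k+1 solves against J_T could be done in O(n) by tree BP;
# a dense solve is used here for brevity.
y = np.linalg.solve(J_T, h[T])        # one tree solve for the potential
Z = np.linalg.solve(J_T, J_TF)        # k tree solves, one per FVS node

J_hat = J_F - J_TF.T @ Z              # k x k Schur complement
x_F = np.linalg.solve(J_hat, h[F] - J_TF.T @ y)   # small k x k system
x_T = y - Z @ x_F                     # correct the tree-node means

x = np.empty(4)
x[T], x[F] = x_T, x_F
assert np.allclose(x, np.linalg.solve(J, h))      # matches the dense solve
```

The mean computation above uses $k+1$ tree solves plus a $k \times k$ system; extending it to all marginal variances is what brings the overall cost to $O(k^{2}n)$ in the paper's message-passing formulation.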
Author Information
Ying Liu (Google Inc.)
Alan S Willsky (Massachusetts Institute of Technology)
More from the Same Authors
- 2013 Poster: Analyzing Hogwild Parallel Gaussian Gibbs Sampling
  Matthew Johnson · James Saunderson · Alan S Willsky
- 2011 Poster: High-Dimensional Graphical Model Selection: Tractable Graph Families and Necessary Conditions
  Animashree Anandkumar · Vincent Tan · Alan S Willsky
- 2011 Oral: High-Dimensional Graphical Model Selection: Tractable Graph Families and Necessary Conditions
  Animashree Anandkumar · Vincent Tan · Alan S Willsky
- 2009 Poster: Sharing Features among Dynamical Systems with Beta Processes
  Emily Fox · Erik Sudderth · Michael Jordan · Alan S Willsky
- 2009 Oral: Sharing Features among Dynamical Systems with Beta Processes
  Emily Fox · Erik Sudderth · Michael Jordan · Alan S Willsky
- 2008 Poster: Nonparametric Bayesian Learning of Switching Linear Dynamical Systems
  Emily Fox · Erik Sudderth · Michael Jordan · Alan S Willsky
- 2008 Spotlight: Nonparametric Bayesian Learning of Switching Linear Dynamical Systems
  Emily Fox · Erik Sudderth · Michael Jordan · Alan S Willsky
- 2007 Spotlight: Message Passing for Max-weight Independent Set
  Sujay Sanghavi · Devavrat Shah · Alan S Willsky
- 2007 Poster: Message Passing for Max-weight Independent Set
  Sujay Sanghavi · Devavrat Shah · Alan S Willsky
- 2007 Poster: Adaptive Embedded Subgraph Algorithms using Walk-Sum Analysis
  Venkat Chandrasekaran · Jason K Johnson · Alan S Willsky
- 2007 Poster: Linear programming analysis of loopy belief propagation for weighted matching
  Sujay Sanghavi · Dmitry Malioutov · Alan S Willsky
- 2007 Poster: Loop Series and Bethe Variational Bounds in Attractive Graphical Models
  Erik Sudderth · Martin J Wainwright · Alan S Willsky