Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure. In the context of learning directed acyclic graphs, greedy algorithms are popular despite their worst-case exponential runtime; in practice, however, they are very efficient. We provide new insight into this phenomenon by studying a general greedy score-based algorithm for learning DAGs. Unlike edge-greedy algorithms such as the popular GES and hill-climbing algorithms, our approach is vertex-greedy and requires at most a polynomial number of score evaluations. We then show how recent polynomial-time algorithms for learning DAG models are special cases of this algorithm, thereby illustrating how these order-based algorithms can be rigorously interpreted as score-based algorithms. This observation suggests new score functions and optimality conditions based on the duality between Bregman divergences and exponential families, which we explore in detail. Explicit sample and computational complexity bounds are derived. Finally, we provide extensive experiments suggesting that this algorithm indeed optimizes the score in a variety of settings.
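To make the contrast with edge-greedy search concrete, the sketch below illustrates the generic vertex-greedy pattern: vertices are placed into a topological order one at a time, each chosen by a local score, which requires only on the order of p^2 score evaluations for p vertices. The function `vertex_greedy_order` and the `local_score` callback are hypothetical names used for illustration; this is a minimal sketch of the general pattern, not the paper's algorithm or score.

```python
def vertex_greedy_order(vertices, local_score):
    """Build a topological order by greedily removing sinks.

    `local_score(v, candidates)` is a hypothetical user-supplied callback
    that scores vertex v against the candidate parent set `candidates`.
    """
    remaining = list(vertices)
    reversed_order = []
    while remaining:
        # One score evaluation per remaining vertex, so O(p^2) evaluations total.
        best = max(
            remaining,
            key=lambda v: local_score(v, [u for u in remaining if u != v]),
        )
        reversed_order.append(best)   # treat the best-scoring vertex as a sink
        remaining.remove(best)
    return reversed_order[::-1]       # reverse so sinks appear last in the order
```

Once an order is fixed, estimating parent sets reduces to variable selection within the order, which is the usual way order-based methods avoid an exhaustive search over DAGs.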
Author Information
Goutham Rajendran (University of Chicago)
I obtained my CS PhD from UChicago. Recently, I've been actively working on causal representation learning and generative models. Some of my recent side projects were on NeRF (computer vision) and automatic speech recognition. I also have extensive competitive programming experience and a track record of publications.
Bohdan Kivva (University of Chicago)
Ming Gao (University of Chicago)
Bryon Aragam (University of Chicago)
More from the Same Authors
- 2022 Spotlight: Identifiability of deep generative models without auxiliary information »
  Bohdan Kivva · Goutham Rajendran · Pradeep Ravikumar · Bryon Aragam
- 2022 Poster: DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization »
  Kevin Bello · Bryon Aragam · Pradeep Ravikumar
- 2022 Poster: Identifiability of deep generative models without auxiliary information »
  Bohdan Kivva · Goutham Rajendran · Pradeep Ravikumar · Bryon Aragam
- 2022 Poster: Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis »
  Aaron Potechin · Goutham Rajendran
- 2021 Poster: Learning latent causal graphs via mixture oracles »
  Bohdan Kivva · Goutham Rajendran · Pradeep Ravikumar · Bryon Aragam
- 2021 Poster: Efficient Bayesian network structure learning via local Markov boundary search »
  Ming Gao · Bryon Aragam
- 2020 Poster: A polynomial-time algorithm for learning nonparametric causal graphs »
  Ming Gao · Yi Ding · Bryon Aragam
- 2019 Poster: Learning Sample-Specific Models with Low-Rank Personalized Regression »
  Ben Lengerich · Bryon Aragam · Eric Xing
- 2019 Poster: Globally optimal score-based learning of directed acyclic graphs in high-dimensions »
  Bryon Aragam · Arash Amini · Qing Zhou
- 2018 Poster: The Sample Complexity of Semi-Supervised Learning with Nonparametric Mixture Models »
  Chen Dan · Liu Leqi · Bryon Aragam · Pradeep Ravikumar · Eric Xing
- 2018 Poster: DAGs with NO TEARS: Continuous Optimization for Structure Learning »
  Xun Zheng · Bryon Aragam · Pradeep Ravikumar · Eric Xing
- 2018 Spotlight: DAGs with NO TEARS: Continuous Optimization for Structure Learning »
  Xun Zheng · Bryon Aragam · Pradeep Ravikumar · Eric Xing