Poster
Sparse Inverse Covariance Selection via Alternating Linearization Methods
Katya Scheinberg · Shiqian Ma · Donald Goldfarb
Gaussian graphical models are of great interest in statistical learning.
Because conditional independencies between nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, i.e., by solving a convex maximum-likelihood problem with an $\ell_1$-regularization term.
In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an $\epsilon$-optimal solution in $O(1/\epsilon)$ iterations. Numerical experiments on both synthetic data and real data from gene association networks show that a practical version of this algorithm outperforms other competing algorithms.
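As a concrete illustration (not the authors' code), the NumPy sketch below shows the alternating linearization idea for the problem $\min_{X \succ 0} \, -\log\det X + \langle \hat{\Sigma}, X \rangle + \rho \|X\|_1$, where $\hat{\Sigma}$ is the sample covariance matrix: each of the two alternating subproblems is solved in closed form, one by an eigenvalue decomposition and the other by entrywise soft-thresholding. The fixed smoothing parameter `mu`, the plain subgradient $\rho\,\mathrm{sign}(Y)$ used to linearize the $\ell_1$ term, and the fixed iteration count are simplifying assumptions; the paper's algorithm manages these quantities more carefully.

```python
import numpy as np

def prox_logdet(W, S, mu):
    """Closed-form solution of min_X -log det(X) + <S, X> + (1/(2*mu))*||X - W||_F^2.

    Optimality gives X - mu * X^{-1} = W - mu * S, which is solved by an
    eigendecomposition: each eigenvalue d maps to the positive root of
    gamma^2 - d*gamma - mu = 0, so the result is always positive definite.
    """
    M = (W + W.T) / 2 - mu * S          # symmetrize for numerical safety
    d, Q = np.linalg.eigh(M)
    gamma = (d + np.sqrt(d ** 2 + 4.0 * mu)) / 2.0
    return (Q * gamma) @ Q.T

def soft_threshold(W, tau):
    """Closed-form prox of tau*||X||_1: entrywise soft-thresholding."""
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

def alm_sparse_inverse_cov(Sigma_hat, rho, mu=0.5, n_iter=500):
    """Simplified alternating linearization sketch for
    min_X -log det(X) + <Sigma_hat, X> + rho*||X||_1."""
    n = Sigma_hat.shape[0]
    X = np.eye(n)
    Y = np.eye(n)
    for _ in range(n_iter):
        # X-step: linearize the l1 term at Y (subgradient rho*sign(Y));
        # the remaining smooth subproblem is solved via eigendecomposition.
        X = prox_logdet(Y - mu * rho * np.sign(Y), Sigma_hat, mu)
        # Y-step: linearize the smooth term at X (gradient Sigma_hat - X^{-1});
        # the remaining l1 subproblem is solved by soft-thresholding.
        G = Sigma_hat - np.linalg.inv(X)
        Y = soft_threshold(X - mu * G, mu * rho)
    return Y

# Example usage on simulated data (hypothetical sizes and rho for illustration):
rng = np.random.default_rng(0)
samples = rng.standard_normal((200, 10))
Sigma_hat = np.cov(samples, rowvar=False)
Theta = alm_sparse_inverse_cov(Sigma_hat, rho=0.1)
```

A practical implementation would add a stopping criterion and adapt the smoothing parameter; the sketch only demonstrates the structural point the abstract makes, namely that both alternating steps are closed-form proximal updates.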
Author Information
Katya Scheinberg (Cornell)
Shiqian Ma (Columbia University)
Donald Goldfarb (Columbia University)
More from the Same Authors
- 2021 Spotlight: Tensor Normal Training for Deep Learning Models (Yi Ren · Donald Goldfarb)
- 2021: High Probability Step Size Lower Bound for Adaptive Stochastic Optimization (Katya Scheinberg · Miaolan Xie)
- 2022: Stochastic Adaptive Regularization Method with Cubics: A High Probability Complexity Bound (Katya Scheinberg · Miaolan Xie)
- 2022: Katya Scheinberg, Stochastic Oracles and Where to Find Them (Katya Scheinberg)
- 2022: Efficient Second-Order Stochastic Methods for Machine Learning (Donald Goldfarb)
- 2021 Workshop: OPT 2021: Optimization for Machine Learning (Courtney Paquette · Quanquan Gu · Oliver Hinder · Katya Scheinberg · Sebastian Stich · Martin Takac)
- 2021 Poster: High Probability Complexity Bounds for Line Search Based on Stochastic Oracles (Billy Jin · Katya Scheinberg · Miaolan Xie)
- 2021 Poster: Tensor Normal Training for Deep Learning Models (Yi Ren · Donald Goldfarb)
- 2020: Invited speaker: Practical Kronecker-factored BFGS and L-BFGS methods for training deep neural networks, Donald Goldfarb (Donald Goldfarb)
- 2020 Poster: Practical Quasi-Newton Methods for Training Deep Neural Networks (Donald Goldfarb · Yi Ren · Achraf Bahamou)
- 2020 Spotlight: Practical Quasi-Newton Methods for Training Deep Neural Networks (Donald Goldfarb · Yi Ren · Achraf Bahamou)
- 2019: Analysis of linear search methods for various gradient approximation schemes for noisy derivative free optimization (Katya Scheinberg)
- 2019: Economical use of second-order information in training machine learning models (Donald Goldfarb)
- 2019 Poster: Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models (Yunfei Teng · Wenbo Gao · François Chalus · Anna Choromanska · Donald Goldfarb · Adrian Weller)