In distributed optimization and distributed numerical linear algebra,
we often encounter an inversion bias: if we want to compute a
quantity that depends on the inverse of a sum of distributed matrices,
then the sum of the inverses does not equal the inverse of the sum.
An example of this occurs in distributed Newton's method, where we
wish to compute (or implicitly work with) the inverse Hessian
multiplied by the gradient.
In this case, locally computed estimates are biased, and so taking a
uniform average will not recover the correct solution.
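The inversion bias is easy to see numerically. Below is a minimal sketch (not from the paper; the Gaussian data model, matrix sizes, and normalization are arbitrary illustrative choices): averaging the inverses of random positive-definite matrices gives a visibly different answer than inverting their average.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 200  # matrix dimension, number of "local" matrices

# m random positive-definite matrices (stand-ins for local Hessian estimates)
As = [X.T @ X / d for X in rng.standard_normal((m, 3 * d, d))]

inv_of_avg = np.linalg.inv(sum(As) / m)              # inverse of the average
avg_of_invs = sum(np.linalg.inv(A) for A in As) / m  # average of the inverses

# The two disagree even for large m: the average of inverses is biased.
gap = np.linalg.norm(avg_of_invs - inv_of_avg)
print(gap)
```

The gap does not vanish as m grows, because each local inverse is a biased estimate of the inverse of the expectation; this is the bias that determinantal averaging corrects.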
To address this, we propose determinantal averaging, a new
approach for correcting the inversion bias.
This approach involves reweighting the local estimates of the Newton
step proportionally to the determinant of the local Hessian estimate,
and then averaging them together to obtain an improved global
estimate. This method provides the first known distributed Newton step that is
asymptotically consistent, i.e., it recovers the exact step in
the limit as the number of distributed partitions grows to infinity.
To show this, we develop new expectation identities and moment bounds
for the determinant and adjugate of a random matrix.
Determinantal averaging can be applied not only to Newton's method,
but to computing any quantity that is a linear transformation of a
matrix inverse, e.g., the trace of the inverse covariance matrix,
which is used in data uncertainty quantification.
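The idea can be sketched numerically. In the hypothetical setup below (the Bernoulli row-subsampling model, all variable names, and all sizes are illustrative assumptions, not the paper's exact experiments), each "machine" estimates the Hessian from a random subsample, and the local Newton steps are averaged with weights proportional to the determinants of the local Hessian estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, m, gamma = 4, 1000, 300, 0.03  # dim, data size, machines, sampling rate

X = rng.standard_normal((n, d))   # data rows
H = X.T @ X / n                   # global Hessian analogue (a Gram matrix)
grad = rng.standard_normal(d)
exact_step = np.linalg.solve(H, grad)

steps = np.empty((m, d))
dets = np.empty(m)
for i in range(m):
    mask = rng.random(n) < gamma                 # Bernoulli subsample of rows
    H_i = X[mask].T @ X[mask] / (gamma * n)      # local Hessian estimate
    steps[i] = np.linalg.solve(H_i, grad)        # local Newton step (biased)
    dets[i] = np.linalg.det(H_i)                 # determinantal weight

uniform_avg = steps.mean(axis=0)                 # plain averaging: biased
det_avg = (dets[:, None] * steps).sum(axis=0) / dets.sum()  # determinantal

err_uniform = np.linalg.norm(uniform_avg - exact_step)
err_det = np.linalg.norm(det_avg - exact_step)
print(err_uniform, err_det)
```

The determinant-weighted average should land closer to the exact step than the uniform one, reflecting the consistency result stated above: since det(H_i) H_i^{-1} is the adjugate of H_i, the weighted average is a ratio of quantities whose expectations match det(H) H^{-1} and det(H) under this subsampling model.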
Author Information
Michal Derezinski (UC Berkeley)
Michael Mahoney (UC Berkeley)
More from the Same Authors
-
2021 Spotlight: Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update »
Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael Mahoney -
2023 Poster: Characterizing Scaling and Transfer Learning of Neural Networks for Scientific Machine Learning »
Shashank Subramanian · Peter Harrington · Kurt Keutzer · Wahid Bhimji · Dmitriy Morozov · Michael Mahoney · Amir Gholami -
2023 Poster: Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training »
Yefan Zhou · Tianyu Pang · Keqin Liu · Charles Martin · Michael Mahoney · Yaoqing Yang -
2023 Poster: When are ensembles really effective? »
Ryan Theisen · Hyunsuk Kim · Yaoqing Yang · Liam Hodgkinson · Michael Mahoney -
2023 Poster: A Heavy-Tailed Algebra for Probabilistic Programming »
Feynman Liang · Liam Hodgkinson · Michael Mahoney -
2023 Poster: Big Little Transformer Decoder »
Sehoon Kim · Karttikeya Mangalam · Suhong Moon · Jitendra Malik · Michael Mahoney · Amir Gholami · Kurt Keutzer -
2023 Tutorial: Recent and Upcoming Developments in Randomized Numerical Linear Algebra for ML »
Michal Derezinski · Michael Mahoney -
2023 Workshop: Heavy Tails in ML: Structure, Stability, Dynamics »
Mert Gurbuzbalaban · Stefanie Jegelka · Michael Mahoney · Umut Simsekli -
2022 Poster: A Fast Post-Training Pruning Framework for Transformers »
Woosuk Kwon · Sehoon Kim · Michael Mahoney · Joseph Hassoun · Kurt Keutzer · Amir Gholami -
2022 Poster: Squeezeformer: An Efficient Transformer for Automatic Speech Recognition »
Sehoon Kim · Amir Gholami · Albert Shaw · Nicholas Lee · Karttikeya Mangalam · Jitendra Malik · Michael Mahoney · Kurt Keutzer -
2022 Poster: LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data »
Ali Eshragh · Fred Roosta · Asef Nazari · Michael Mahoney -
2021 : Q&A with Michael Mahoney »
Michael Mahoney -
2021 : Putting Randomized Matrix Algorithms in LAPACK, and Connections with Second-order Stochastic Optimization, Michael Mahoney »
Michael Mahoney -
2021 Poster: Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update »
Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael Mahoney -
2021 Poster: Noisy Recurrent Neural Networks »
Soon Hoe Lim · N. Benjamin Erichson · Liam Hodgkinson · Michael Mahoney -
2021 Poster: Hessian Eigenspectra of More Realistic Nonlinear Models »
Zhenyu Liao · Michael Mahoney -
2021 Poster: Characterizing possible failure modes in physics-informed neural networks »
Aditi Krishnapriyan · Amir Gholami · Shandian Zhe · Robert Kirby · Michael Mahoney -
2021 Poster: Taxonomizing local versus global structure in neural network loss landscapes »
Yaoqing Yang · Liam Hodgkinson · Ryan Theisen · Joe Zou · Joseph Gonzalez · Kannan Ramchandran · Michael Mahoney -
2021 Poster: Stateful ODE-Nets using Basis Function Expansions »
Alejandro Queiruga · N. Benjamin Erichson · Liam Hodgkinson · Michael Mahoney -
2021 Oral: Hessian Eigenspectra of More Realistic Nonlinear Models »
Zhenyu Liao · Michael Mahoney -
2020 Poster: Boundary thickness and robustness in learning models »
Yaoqing Yang · Rajiv Khanna · Yaodong Yu · Amir Gholami · Kurt Keutzer · Joseph Gonzalez · Kannan Ramchandran · Michael Mahoney -
2020 Poster: Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization »
Michal Derezinski · Burak Bartan · Mert Pilanci · Michael Mahoney -
2020 Poster: Sampling from a k-DPP without looking at all items »
Daniele Calandriello · Michal Derezinski · Michal Valko -
2020 Spotlight: Sampling from a k-DPP without looking at all items »
Daniele Calandriello · Michal Derezinski · Michal Valko -
2020 Poster: Exact expressions for double descent and implicit regularization via surrogate random design »
Michal Derezinski · Feynman Liang · Michael Mahoney -
2020 Poster: Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystrom method »
Michal Derezinski · Rajiv Khanna · Michael Mahoney -
2020 Poster: Precise expressions for random projections: Low-rank approximation and randomized Newton »
Michal Derezinski · Feynman Liang · Zhenyu Liao · Michael Mahoney -
2020 Oral: Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystrom method »
Michal Derezinski · Rajiv Khanna · Michael Mahoney -
2020 Poster: A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent »
Zhenyu Liao · Romain Couillet · Michael Mahoney -
2020 Poster: A Statistical Framework for Low-bitwidth Training of Deep Neural Networks »
Jianfei Chen · Yu Gai · Zhewei Yao · Michael Mahoney · Joseph Gonzalez -
2019 : Final remarks »
Anastasios Kyrillidis · Albert Berahas · Fred Roosta · Michael Mahoney -
2019 Workshop: Beyond first order methods in machine learning systems »
Anastasios Kyrillidis · Albert Berahas · Fred Roosta · Michael Mahoney -
2019 : Opening Remarks »
Anastasios Kyrillidis · Albert Berahas · Fred Roosta · Michael Mahoney -
2019 Poster: ANODEV2: A Coupled Neural ODE Framework »
Tianjun Zhang · Zhewei Yao · Amir Gholami · Joseph Gonzalez · Kurt Keutzer · Michael Mahoney · George Biros -
2019 Poster: Exact sampling of determinantal point processes with sublinear time preprocessing »
Michal Derezinski · Daniele Calandriello · Michal Valko -
2018 Poster: GIANT: Globally Improved Approximate Newton Method for Distributed Optimization »
Shusen Wang · Fred Roosta · Peng Xu · Michael Mahoney -
2018 Poster: Leveraged volume sampling for linear regression »
Michal Derezinski · Manfred K. Warmuth · Daniel Hsu -
2018 Spotlight: Leveraged volume sampling for linear regression »
Michal Derezinski · Manfred K. Warmuth · Daniel Hsu -
2018 Poster: Hessian-based Analysis of Large Batch Training and Robustness to Adversaries »
Zhewei Yao · Amir Gholami · Qi Lei · Kurt Keutzer · Michael Mahoney -
2017 Poster: Unbiased estimates for linear regression via volume sampling »
Michal Derezinski · Manfred K. Warmuth -
2017 Spotlight: Unbiased estimates for linear regression via volume sampling »
Michal Derezinski · Manfred K. Warmuth -
2017 Poster: Union of Intersections (UoI) for Interpretable Data Driven Discovery and Prediction »
Kristofer Bouchard · Alejandro Bujan · Farbod Roosta-Khorasani · Shashanka Ubaru · Mr. Prabhat · Antoine Snijders · Jian-Hua Mao · Edward Chang · Michael W Mahoney · Sharmodeep Bhattacharya -
2016 Poster: Feature-distributed sparse regression: a screen-and-clean approach »
Jiyan Yang · Michael Mahoney · Michael Saunders · Yuekai Sun -
2016 Poster: Sub-sampled Newton Methods with Non-uniform Sampling »
Peng Xu · Jiyan Yang · Farbod Roosta-Khorasani · Christopher Ré · Michael Mahoney -
2015 : Challenges in Multiresolution Methods for Graph-based Learning »
Michael Mahoney -
2015 : Using Local Spectral Methods in Theory and in Practice »
Michael Mahoney -
2015 Poster: Fast Randomized Kernel Ridge Regression with Statistical Guarantees »
Ahmed Alaoui · Michael Mahoney -
2014 Poster: The limits of squared Euclidean distance regularization »
Michal Derezinski · Manfred K. Warmuth -
2014 Spotlight: The limits of squared Euclidean distance regularization »
Michal Derezinski · Manfred K. Warmuth -
2013 Workshop: Large Scale Matrix Analysis and Inference »
Reza Zadeh · Gunnar Carlsson · Michael Mahoney · Manfred K. Warmuth · Wouter M Koolen · Nati Srebro · Satyen Kale · Malik Magdon-Ismail · Ashish Goel · Matei A Zaharia · David Woodruff · Ioannis Koutis · Benjamin Recht -
2012 Poster: Semi-supervised Eigenvectors for Locally-biased Learning »
Toke Jansen Hansen · Michael W Mahoney -
2011 Workshop: Sparse Representation and Low-rank Approximation »
Ameet S Talwalkar · Lester W Mackey · Mehryar Mohri · Michael W Mahoney · Francis Bach · Mike Davies · Remi Gribonval · Guillaume R Obozinski -
2011 Poster: Regularized Laplacian Estimation and Fast Eigenvector Approximation »
Patrick O Perry · Michael W Mahoney -
2010 Workshop: Low-rank Methods for Large-scale Machine Learning »
Arthur Gretton · Michael W Mahoney · Mehryar Mohri · Ameet S Talwalkar -
2010 Poster: CUR from a Sparse Optimization Viewpoint »
Jacob Bien · Ya Xu · Michael W Mahoney -
2009 Poster: Unsupervised Feature Selection for the $k$-means Clustering Problem »
Christos Boutsidis · Michael W Mahoney · Petros Drineas