

Poster in Workshop: Workshop on Machine Learning and Compression

A Tighter Complexity Analysis of SparseGPT

Xiaoyu Li · Yingyu Liang · Zhenmei Shi · Zhao Song


Abstract: In this work, we improve the analysis of the running time of SparseGPT [Frantar, Alistarh ICML 2023] from $O(d^3)$ to $O(d^{\omega} + d^{2+a+o(1)} + d^{1+\omega(1,1,a)-a})$ for any $a \in [0,1]$, where $\omega$ is the exponent of matrix multiplication. In particular, for the current $\omega \approx 2.371$ [Alman, Duan, Williams, Xu, Xu, Zhou 2024], our running time boils down to $O(d^{2.53})$. This improvement comes from analyzing the lazy update behavior in iterative maintenance problems, as in [Deng, Song, Weinstein 2022; Brand, Song, Zhou ICML 2024].
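A brief sketch of how the $O(d^{2.53})$ figure follows from the general bound above (an illustrative balancing argument consistent with the abstract, not reproduced from the paper; the paper's exact choice of $a$ may differ):

\[
O\!\left(d^{\omega} + d^{2+a+o(1)} + d^{1+\omega(1,1,a)-a}\right)
\]

The second and third terms trade off in $a$: as $a$ grows, the exponent $2+a$ increases while $1+\omega(1,1,a)-a$ decreases (known bounds on the rectangular exponent $\omega(1,1,a)$ grow slower than $2a$), so the minimum over $a$ is attained where the two exponents balance:

\[
2 + a \;=\; 1 + \omega(1,1,a) - a
\quad\Longleftrightarrow\quad
\omega(1,1,a) \;=\; 1 + 2a .
\]

With current fast rectangular matrix multiplication bounds this balance occurs near $a \approx 0.53$; since $\omega \approx 2.371 < 2 + 0.53$, the first term is dominated, yielding $O(d^{2+0.53}) = O(d^{2.53})$.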
