Poster in Workshop on Machine Learning and Compression
A Tighter Complexity Analysis of SparseGPT
Xiaoyu Li · Yingyu Liang · Zhenmei Shi · Zhao Song
Abstract:
In this work, we improved the analysis of the running time of SparseGPT [Frantar, Alistarh ICML 2023] from $O(d^{3})$ to $O(d^{\omega} + d^{2+a+o(1)} + d^{1+\omega(1,1,a)-a})$ for any $a \in [0, 1]$, where $\omega$ is the exponent of matrix multiplication. In particular, for the current $\omega \approx 2.371$ [Alman, Duan, Williams, Xu, Xu, Zhou 2024], our running times boil down to $O(d^{2.53})$. This running time is due to the analysis of the lazy update behavior in iterative maintenance problems, such as [Deng, Song, Weinstein 2022; Brand, Song, Zhou ICML 2024].
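The lazy-update idea behind this kind of bound can be illustrated with a small sketch: rather than applying each rank-1 correction to a maintained $d \times d$ inverse immediately (which costs $d^{2}$ per update and $d^{3}$ over $d$ updates), corrections are buffered and flushed in blocks via the Woodbury identity, so the dominant cost becomes rectangular matrix products that benefit from fast matrix multiplication. The class, names, and structure below are illustrative assumptions, not the authors' implementation; the `block_size` parameter plays the role of $d^{a}$.

```python
import numpy as np

class LazyInverseMaintenance:
    """Maintain A^{-1} under rank-1 updates A += u v^T, applying
    buffered updates lazily in blocks (a sketch, not the paper's code)."""

    def __init__(self, A: np.ndarray, block_size: int):
        self.M = np.linalg.inv(A)  # maintained inverse of the evolving matrix
        self.b = block_size        # batch threshold, playing the role of d^a
        self.U, self.V = [], []    # buffered rank-1 updates

    def update(self, u: np.ndarray, v: np.ndarray) -> None:
        """Queue a rank-1 update; flush lazily once the buffer is full."""
        self.U.append(u)
        self.V.append(v)
        if len(self.U) >= self.b:
            self.flush()

    def flush(self) -> None:
        """Apply all buffered updates at once via the Woodbury identity:
        (A + U V^T)^{-1} = M - M U (I + V^T M U)^{-1} V^T M,
        whose cost is dominated by d x b rectangular products."""
        if not self.U:
            return
        U = np.column_stack(self.U)        # d x b
        V = np.column_stack(self.V)        # d x b
        MU = self.M @ U                    # d x b rectangular product
        VM = V.T @ self.M                  # b x d rectangular product
        S = np.eye(U.shape[1]) + V.T @ MU  # small b x b system
        self.M -= MU @ np.linalg.solve(S, VM)
        self.U, self.V = [], []

    def query(self) -> np.ndarray:
        """Return the inverse consistent with all updates so far."""
        self.flush()
        return self.M

# Tiny usage check against recomputing the inverse from scratch.
rng = np.random.default_rng(0)
d = 64
A = np.eye(d) + 0.01 * rng.standard_normal((d, d))
lim = LazyInverseMaintenance(A.copy(), block_size=8)
for _ in range(32):
    u = 0.01 * rng.standard_normal(d)
    v = 0.01 * rng.standard_normal(d)
    A += np.outer(u, v)
    lim.update(u, v)
assert np.allclose(lim.query(), np.linalg.inv(A), atol=1e-6)
```

With eager updates, each of the $d$ pruning steps touches the full $d \times d$ inverse; with blocks of size $b = d^{a}$, the per-block cost is one Woodbury step built from $d \times b$ products, which is where the $\omega(1,1,a)$ rectangular matrix multiplication exponent enters the stated bound.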