Modern distributed training of machine learning models often suffers from high communication overhead when synchronizing stochastic gradients and model parameters. In this paper, to reduce this communication overhead, we propose \emph{double quantization}, a general scheme that quantizes both model parameters and gradients. Based on this scheme, we develop three communication-efficient algorithms: (i) AsyLPG, a low-precision algorithm with asynchronous parallelism; (ii) Sparse-AsyLPG, which integrates gradient sparsification with double quantization; and (iii) an accelerated AsyLPG, which shows that double quantization can be combined with momentum-based acceleration. We establish rigorous performance guarantees for all three algorithms, and experiments on a multi-server test-bed with real-world datasets demonstrate that they effectively reduce the number of transmitted bits without degrading accuracy, and significantly outperform existing methods that quantize only model parameters or only gradients.
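The abstract does not specify the quantizer used by AsyLPG, so the following is only a minimal sketch of the general double-quantization idea: an unbiased stochastic uniform quantizer applied to both the parameters a worker receives and the gradient it sends back. The function name, bit-widths, and the stand-in gradient computation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stochastic_quantize(x, num_bits=8):
    """Unbiased stochastic uniform quantizer (illustrative; not the paper's exact scheme).

    Maps x onto a grid of 2**num_bits levels spanning [min(x), max(x)] and rounds
    each entry up or down at random so that E[Q(x)] = x.
    """
    levels = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (x - lo) / scale                 # position on the quantization grid
    floor = np.floor(normalized)
    prob_up = normalized - floor                  # probability of rounding up
    quantized = floor + (np.random.rand(*x.shape) < prob_up)
    return lo + quantized * scale                 # value the receiver reconstructs

# Double quantization: compress BOTH directions of communication, so every
# transmitted vector is low-precision.
params = np.random.randn(1000).astype(np.float32)
low_prec_params = stochastic_quantize(params, num_bits=8)     # quantized model sent to worker
grad = low_prec_params * 0.01 + 0.1 * np.random.randn(1000)   # stand-in for a stochastic gradient
low_prec_grad = stochastic_quantize(grad, num_bits=8)         # quantized gradient sent to server
```

A Sparse-AsyLPG-style variant would, per the abstract, additionally sparsify the gradient (e.g., transmit only a subset of its entries) before quantizing it; the exact sparsification rule is not described in this excerpt.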
Author Information
Yue Yu (Tsinghua University)
Jiaxiang Wu (Tencent AI Lab)
Longbo Huang (IIIS, Tsinghua University)
More from the Same Authors
- 2020 Poster: Revisiting Parameter Sharing for Automatic Neural Channel Number Search
  Jiaxing Wang · Haoli Bai · Jiaxiang Wu · Xupeng Shi · Junzhou Huang · Irwin King · Michael Lyu · Jian Cheng
- 2020 Poster: Softmax Deep Double Deterministic Policy Gradients
  Ling Pan · Qingpeng Cai · Longbo Huang
- 2020 Poster: Restless-UCB, an Efficient and Low-complexity Algorithm for Online Restless Bandits
  Siwei Wang · Longbo Huang · John C. S. Lui
- 2018 Poster: Multi-armed Bandits with Compensation
  Siwei Wang · Longbo Huang