Most distributed machine learning systems today, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms is the high communication cost on the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized parallel stochastic gradient descent (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, as they simply assume an application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that identifies a regime in which decentralized algorithms can outperform centralized algorithms for distributed stochastic gradient descent: D-PSGD has a total computational complexity comparable to C-PSGD but requires much less communication on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms with up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
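To make the abstract's D-PSGD idea concrete, here is a minimal simulation sketch: each node keeps a local model copy, averages it with its neighbors via a doubly stochastic mixing matrix (gossip), and then takes a local stochastic gradient step, so no central parameter server is involved. The toy quadratic objective, the ring topology, the uniform 1/3 mixing weights, and the step size are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

# Toy problem: n nodes jointly minimize f(x) = (1/n) * sum_i ||x - c_i||^2 / 2,
# whose global minimizer is the mean of the local targets c_i.
rng = np.random.default_rng(0)
n, d, steps, lr = 8, 4, 300, 0.1

c = rng.normal(size=(n, d))   # each node i holds a private target c_i
x = np.zeros((n, d))          # one local model copy per node

# Ring topology: node i averages with itself and neighbors i-1, i+1.
# W is doubly stochastic, so gossip preserves the network-wide average.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

for _ in range(steps):
    # Each node computes a noisy local stochastic gradient...
    grad = x - c + 0.01 * rng.normal(size=(n, d))
    # ...then gossip-averages with neighbors and takes a local SGD step.
    # Only neighbor-to-neighbor communication is needed (row i of W
    # touches at most 3 entries), so no node bears a central bottleneck.
    x = W @ x - lr * grad

# After enough steps, the nodes reach approximate consensus near the optimum.
consensus = x.mean(axis=0)
```

Each iteration communicates only along ring edges, which is the source of the "much less communication on the busiest node" claim: the busiest node talks to a constant number of neighbors rather than to all workers.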
Author Information
Xiangru Lian (University of Rochester)
Ce Zhang (ETH Zurich)
Huan Zhang
Cho-Jui Hsieh (UCLA, Google)
Wei Zhang (IBM T.J.Watson Research Center)
BE, Beijing University of Technology, 2005; MSc, Technical University of Denmark, 2008; PhD, University of Wisconsin–Madison, 2013 — all in computer science. Has published papers in ASPLOS, OOPSLA, OSDI, PLDI, IJCAI, ICDM, and NIPS.
Ji Liu (University of Rochester)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent »
  Thu Dec 7th 02:30 -- 06:30 AM, Room Pacific Ballroom #167
More from the Same Authors
- 2020 Poster: Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting »
  Defu Cao · Yujing Wang · Juanyong Duan · Ce Zhang · Xia Zhu · Congrui Huang · Yunhai Tong · Bixiong Xu · Jing Bai · Jie Tong · Qi Zhang
- 2020 Spotlight: Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting »
  Defu Cao · Yujing Wang · Juanyong Duan · Ce Zhang · Xia Zhu · Congrui Huang · Yunhai Tong · Bixiong Xu · Jing Bai · Jie Tong · Qi Zhang
- 2020 Poster: A Decentralized Parallel Algorithm for Training Generative Adversarial Nets »
  Mingrui Liu · Wei Zhang · Youssef Mroueh · Xiaodong Cui · Jarret Ross · Tianbao Yang · Payel Das
- 2020 Poster: ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training »
  Chia-Yu Chen · Jiamin Ni · Songtao Lu · Xiaodong Cui · Pin-Yu Chen · Xiao Sun · Naigang Wang · Swagath Venkataramani · Vijayalakshmi (Viji) Srinivasan · Wei Zhang · Kailash Gopalakrishnan
- 2020 Poster: Learning to Mutate with Hypergradient Guided Population »
  Zhiqiang Tao · Yaliang Li · Bolin Ding · Ce Zhang · Jingren Zhou · Yun Fu
- 2020 Poster: On Convergence of Nearest Neighbor Classifiers over Feature Transformations »
  Luka Rimanic · Cedric Renggli · Bo Li · Ce Zhang
- 2019 Poster: Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks »
  Xiao Sun · Jungwook Choi · Chia-Yu Chen · Naigang Wang · Swagath Venkataramani · Vijayalakshmi (Viji) Srinivasan · Xiaodong Cui · Wei Zhang · Kailash Gopalakrishnan
- 2018 Poster: Communication Compression for Decentralized Training »
  Hanlin Tang · Shaoduo Gan · Ce Zhang · Tong Zhang · Ji Liu
- 2018 Poster: Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks »
  Xiaodong Cui · Wei Zhang · Zoltán Tüske · Michael Picheny
- 2017 Poster: Scalable Demand-Aware Recommendation »
  Jinfeng Yi · Cho-Jui Hsieh · Kush Varshney · Lijun Zhang · Yao Li
- 2017 Poster: A Greedy Approach for Budgeted Maximum Inner Product Search »
  Hsiang-Fu Yu · Cho-Jui Hsieh · Qi Lei · Inderjit Dhillon
- 2015 Poster: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization »
  Xiangru Lian · Yijun Huang · Yuncheng Li · Ji Liu
- 2015 Spotlight: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization »
  Xiangru Lian · Yijun Huang · Yuncheng Li · Ji Liu