High network communication cost for synchronizing gradients and parameters is a well-known bottleneck of distributed training. In this work, we propose TernGrad, which uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1, 0, 1}, which can aggressively reduce communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by this bound, we propose layer-wise ternarizing and gradient clipping to improve convergence. Our experiments show that applying TernGrad to AlexNet incurs no accuracy loss and can even improve accuracy, while the accuracy loss it induces on GoogLeNet is less than 2% on average. Finally, we propose a performance model to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks.
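For intuition, below is a minimal NumPy sketch of the stochastic ternarization with gradient clipping that the abstract describes, applied to one layer's gradient. The function name `ternarize` and the `clip_sigma` parameter are illustrative assumptions, not the paper's actual API, and the layer-wise implementation in TernGrad may differ in detail.

```python
import numpy as np

def ternarize(grad, clip_sigma=None, rng=np.random.default_rng(0)):
    """Stochastically ternarize one layer's gradient to {-s, 0, +s}.

    s is the maximum absolute value of the (optionally clipped) gradient;
    each element keeps its sign with probability |g_i| / s, so the ternary
    gradient is an unbiased estimate of the clipped gradient.
    """
    g = np.asarray(grad, dtype=np.float64)
    if clip_sigma is not None:
        # Gradient clipping (assumed threshold = clip_sigma standard deviations):
        # limiting magnitudes shrinks the scaler s and reduces ternarization variance.
        c = clip_sigma * g.std()
        g = np.clip(g, -c, c)
    s = np.max(np.abs(g))
    if s == 0.0:
        return np.zeros_like(g)
    keep = rng.random(g.shape) < np.abs(g) / s   # Bernoulli(|g_i| / s) mask
    return s * np.sign(g) * keep                 # values in {-s, 0, +s}
```

In data-parallel training, each worker would ternarize its gradients layer by layer before communication, so only the scalar s and a ternary code per element need to be exchanged; because the expectation of the ternary gradient equals the original (clipped) gradient, stochastic gradient descent can still converge under the bounded-gradient assumption.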
Author Information
Wei Wen (Duke University)
I am a Ph.D. student at Duke University. My research interests include scalable deep learning, model compression, structure learning, and deep neural network understanding.
Cong Xu (Hewlett Packard Labs)
Feng Yan (University of Nevada, Reno)
Chunpeng Wu (Duke University)
Yandan Wang (University of Pittsburgh)
Yiran Chen (Duke University)
Hai Li (Duke University)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
  Thu. Dec 7th, 02:30 -- 06:30 AM, Room Pacific Ballroom #127
More from the Same Authors
- 2022: Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification
  Randolph Linderman · Jingyang Zhang · Nathan Inkawhich · Hai Li · Yiran Chen
- 2023 Poster: FLSL: Feature-level Self-supervised Learning
  Qing Su · Anton Netchaev · Hai Li · Shihao Ji
- 2022 Competition: AutoML Decathlon: Diverse Tasks, Modern Methods, and Efficiency at Scale
  Samuel Guo · Cong Xu · Nicholas Roberts · Misha Khodak · Junhong Shen · Evan Sparks · Ameet Talwalkar · Yuriy Nevmyvaka · Frederic Sala · Anderson Schneider
- 2022 Poster: Why do We Need Large Batchsizes in Contrastive Learning? A Gradient-Bias Perspective
  Changyou Chen · Jianyi Zhang · Yi Xu · Liqun Chen · Jiali Duan · Yiran Chen · Son Tran · Belinda Zeng · Trishul Chilimbi
- 2021 Poster: SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement
  Heyang Qin · Samyam Rajbhandari · Olatunji Ruwase · Feng Yan · Lei Yang · Yuxiong He
- 2021 Poster: FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
  Jingwei Sun · Ang Li · Louis DiValentin · Amin Hassanzadeh · Yiran Chen · Hai Li
- 2020 Poster: DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
  Huanrui Yang · Jingyang Zhang · Hongliang Dong · Nathan Inkawhich · Andrew Gardner · Andrew Touchet · Wesley Wilkes · Heath Berry · Hai Li
- 2020 Poster: Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability
  Nathan Inkawhich · Kevin J Liang · Binghui Wang · Matthew Inkawhich · Lawrence Carin · Yiran Chen
- 2020 Oral: DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
  Huanrui Yang · Jingyang Zhang · Hongliang Dong · Nathan Inkawhich · Andrew Gardner · Andrew Touchet · Wesley Wilkes · Heath Berry · Hai Li
- 2019 Poster: Defending Neural Backdoors via Generative Distribution Modeling
  Ximing Qiao · Yukun Yang · Hai Li
- 2018 Poster: Generalized Inverse Optimization through Online Learning
  Chaosheng Dong · Yiran Chen · Bo Zeng
- 2016 Poster: Learning Structured Sparsity in Deep Neural Networks
  Wei Wen · Chunpeng Wu · Yandan Wang · Yiran Chen · Hai Li