Current distributed learning systems suffer serious performance degradation under Byzantine attacks. This paper proposes Election Coding, a coding-theoretic framework that guarantees Byzantine robustness for distributed learning algorithms based on signed stochastic gradient descent (SignSGD), which minimizes the worker-master communication load. The framework explores new information-theoretic limits on finding the majority opinion when some workers may be attacked by an adversary, and paves the way to robust, communication-efficient distributed learning algorithms. Under this framework, we construct two types of codes, random Bernoulli codes and deterministic algebraic codes, that tolerate Byzantine attacks with a controlled amount of computational redundancy and guarantee convergence in general non-convex scenarios. For the Bernoulli codes, we provide an upper bound on the error probability of estimating the signs of the true gradients, which gives useful insights into code design for Byzantine tolerance. The proposed deterministic codes are proven to perfectly tolerate arbitrary Byzantine attacks. Experiments on real datasets confirm that the suggested codes substantially improve the Byzantine tolerance of distributed learning systems employing SignSGD.
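To make the idea concrete, below is a minimal toy sketch (not the authors' implementation) of SignSGD with majority vote under sign-flipping Byzantine workers, contrasting the uncoded scheme with a Bernoulli-style redundancy assignment in the spirit of the paper's random codes. The noise level, the Byzantine set, the redundancy probability p, and the diagonal-plus-Bernoulli assignment matrix are all illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 9                      # model dimension; number of workers / data partitions
true_grad = rng.normal(size=d)

# Per-partition gradient estimates (honest computation is noisy).
partition_grads = true_grad + rng.normal(size=(n, d))

byzantine = {0, 1}                # adversarial workers flip every sign they send

def majority_sign(votes, byzantine):
    """Master-side aggregation: coordinate-wise majority vote of sign vectors."""
    votes = votes.copy()
    for j in byzantine:
        votes[j] = -votes[j]      # worst-case attack: flip the whole sign vector
    return np.sign(votes.sum(axis=0))

# Uncoded SignSGD: worker j sends the sign of its own partition's gradient.
uncoded = majority_sign(np.sign(partition_grads), byzantine)

# Bernoulli-style code (illustrative): worker j additionally computes each
# other partition with probability p, sums the gradients it computed, and
# sends a single sign vector; redundancy makes honest votes more reliable.
p = 0.4
G = rng.random((n, n)) < p
G |= np.eye(n, dtype=bool)        # every worker keeps its own partition
coded = majority_sign(np.sign(G @ partition_grads), byzantine)

truth = np.sign(true_grad)
print("uncoded sign agreement:", (uncoded == truth).mean())
print("coded   sign agreement:", (coded == truth).mean())
```

In this toy setup, each coded worker's vote aggregates several partition gradients, so honest votes match the true gradient signs more often and the two flipped votes are outvoted more reliably; the paper's analysis bounds this sign-estimation error for Bernoulli codes and designs deterministic codes that tolerate attacks exactly.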
Author Information
Jy-yong Sohn (KAIST)
Dong-Jun Han (KAIST)
Beongjun Choi (KAIST)
Jaekyun Moon (Korea Advanced Institute of Science and Technology)
More from the Same Authors
- 2021 Poster: Few-Round Learning for Federated Learning »
  Younghyun Park · Dong-Jun Han · Do-Yeon Kim · Jun Seo · Jaekyun Moon
- 2021 Poster: Sageflow: Robust Federated Learning against Both Stragglers and Adversaries »
  Jungwuk Park · Dong-Jun Han · Minseok Choi · Jaekyun Moon
- 2020 Poster: Attack of the Tails: Yes, You Really Can Backdoor Federated Learning »
  Hongyi Wang · Kartik Sreenivasan · Shashank Rajput · Harit Vishwakarma · Saurabh Agarwal · Jy-yong Sohn · Kangwook Lee · Dimitris Papailiopoulos