Poster

Byzantine Stochastic Gradient Descent

Dan Alistarh · Zeyuan Allen-Zhu · Jerry Li

Room 517 AB #164

Keywords: [ Learning Theory ] [ Online Learning ]


Abstract: This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of m machines which allegedly compute stochastic gradients every iteration, an α-fraction are Byzantine, and may behave adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds ε-approximate minimizers of convex functions in T = Õ(1/(ε²m) + α²/ε²) iterations. In contrast, traditional mini-batch SGD needs T = O(1/(ε²m)) iterations, but cannot tolerate Byzantine failures. Further, we provide a lower bound showing that, up to logarithmic factors, our algorithm is information-theoretically optimal both in terms of sample complexity and time complexity.
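To make the setting concrete, the following is a minimal toy sketch of Byzantine-tolerant distributed SGD. It does not implement the paper's actual algorithm (whose aggregation rule is more involved); instead it uses coordinate-wise median aggregation as a simple stand-in that tolerates a minority of adversarial workers. All names, the noise model, and the adversarial behavior here are illustrative assumptions.

```python
import numpy as np

def byzantine_sgd(grad_fn, x0, n_workers=10, n_byzantine=2,
                  steps=500, lr=0.1, seed=0):
    """Toy Byzantine-tolerant SGD. Each honest worker returns a noisy
    stochastic gradient; Byzantine workers return arbitrary adversarial
    vectors. Gradients are combined with a coordinate-wise median, a
    simple robust aggregator (NOT the paper's algorithm) that works
    when fewer than half the workers are Byzantine."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        grads = []
        for w in range(n_workers):
            if w < n_byzantine:
                # Byzantine worker: sends an arbitrary adversarial vector.
                grads.append(-100.0 * np.ones_like(x))
            else:
                # Honest worker: true gradient plus zero-mean noise.
                grads.append(grad_fn(x) + rng.normal(scale=0.5, size=x.shape))
        # Robust aggregation: the coordinate-wise median ignores the
        # minority of adversarial gradients.
        agg = np.median(np.stack(grads), axis=0)
        x -= lr * agg
    return x

# Minimize f(x) = ||x - 3||^2 / 2, whose gradient is (x - 3).
x_star = byzantine_sgd(lambda x: x - 3.0, x0=np.zeros(2))
```

With 2 of 10 workers adversarial, the median-based iterate still lands near the true minimizer, whereas plain gradient averaging would be dragged far away by the adversarial vectors.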
