Poster
Byzantine Stochastic Gradient Descent
Dan Alistarh · Zeyuan Allen-Zhu · Jerry Li
Room 517 AB #164
Keywords: [ Learning Theory ] [ Online Learning ]
Abstract:
This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of $m$ machines which allegedly compute stochastic gradients every iteration, an $\alpha$-fraction are Byzantine, and may behave adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds $\varepsilon$-approximate minimizers of convex functions in $T = \tilde{O}\!\left(\frac{1}{\varepsilon^2 m} + \frac{\alpha^2}{\varepsilon^2}\right)$ iterations. In contrast, traditional mini-batch SGD needs $T = O\!\left(\frac{1}{\varepsilon^2 m}\right)$ iterations, but cannot tolerate Byzantine failures.
Further, we provide a lower bound showing that, up to logarithmic factors, our algorithm is information-theoretically optimal both in terms of sample complexity and time complexity.
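As a rough illustration of the setting (not the paper's algorithm), the sketch below simulates a server running SGD with gradients reported by m workers, an alpha-fraction of which are Byzantine, and aggregates them with a coordinate-wise median rather than the mean. The median aggregator, the toy quadratic objective, and the particular adversarial behavior are assumptions chosen for illustration only; the actual algorithm of Alistarh, Allen-Zhu, and Li uses a more involved concentration-based filtering rule to obtain the stated rates.

```python
# Illustrative sketch: distributed SGD with a minority of Byzantine workers.
# NOT the paper's aggregation rule; coordinate-wise median is a stand-in.
import numpy as np

def honest_gradient(x, rng):
    # Stochastic gradient of the toy objective f(x) = 0.5 * ||x||^2, plus noise.
    return x + rng.normal(scale=0.5, size=x.shape)

def byzantine_gradient(x, rng):
    # One possible adversarial behavior: report a scaled, negated gradient
    # to push the iterate away from the optimum.
    return -10.0 * x

def robust_sgd(m=20, alpha=0.2, dim=5, T=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    num_byz = int(alpha * m)
    x = rng.normal(size=dim)
    for _ in range(T):
        grads = [honest_gradient(x, rng) for _ in range(m - num_byz)]
        grads += [byzantine_gradient(x, rng) for _ in range(num_byz)]
        # Robust aggregation: coordinate-wise median, so a minority of
        # arbitrary reports cannot dominate the update direction.
        g = np.median(np.stack(grads), axis=0)
        x -= lr * g
    return x

if __name__ == "__main__":
    x_final = robust_sgd()
    print("final distance to optimum:", np.linalg.norm(x_final))
```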