

Poster

DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation

Shashank Rajput · Hongyi Wang · Zachary Charles · Dimitris Papailiopoulos

East Exhibition Hall B, C #26

Keywords: [ Algorithms ] [ Large Scale Learning ] [ Adversarial Learning; Optimization ]


Abstract:

To improve the resilience of distributed training to worst-case, or Byzantine, node failures, several recent methods have replaced gradient averaging with robust aggregation methods. Such techniques can have high computational costs, often quadratic in the number of compute nodes, and offer only limited robustness guarantees. Other methods have instead used redundancy to guarantee robustness, but can tolerate only a limited number of Byzantine failures. In this work, we present DETOX, a Byzantine-resilient distributed training framework that combines algorithmic redundancy with robust aggregation. DETOX operates in two steps: a filtering step that uses limited redundancy to significantly reduce the effect of Byzantine nodes, and a hierarchical aggregation step that can be used in tandem with any state-of-the-art robust aggregation method. We show theoretically that this leads to a substantial increase in robustness, and that the per-iteration runtime can be nearly linear in the number of compute nodes. We provide extensive experiments over real distributed setups across a variety of large-scale machine learning tasks, showing that DETOX leads to orders-of-magnitude improvements in accuracy and speed over many state-of-the-art Byzantine-resilient approaches.
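
The two-step structure described above can be sketched compactly. The following is a minimal NumPy illustration, not the authors' implementation: the group size r, the bucket count k, and the coordinate-wise median used as the default robust aggregator are assumptions made only for this example; any robust aggregation method could be plugged in at the second step.

    import numpy as np

    def majority_filter(group_grads):
        # Honest nodes in a redundant group compute the same mini-batch gradient,
        # so the majority value is the uncorrupted one (assuming a Byzantine
        # minority within the group). Ties fall back to the first vector seen.
        votes = {}
        for g in group_grads:
            entry = votes.setdefault(g.tobytes(), [0, g])
            entry[0] += 1
        _, winner = max(votes.values(), key=lambda v: v[0])
        return winner

    def detox_aggregate(grads, r, k, robust_agg=None):
        # Coordinate-wise median stands in here for any robust aggregator
        # (geometric median, Multi-Krum, etc.).
        if robust_agg is None:
            robust_agg = lambda G: np.median(np.stack(G), axis=0)
        # Step 1 (filtering): nodes are grouped in blocks of r that were assigned
        # the same batch; a majority vote inside each group discards most
        # Byzantine gradients cheaply.
        filtered = [majority_filter(grads[i:i + r]) for i in range(0, len(grads), r)]
        # Step 2 (hierarchical aggregation): split the filtered gradients into k
        # buckets, robustly aggregate each bucket, then average the bucket outputs.
        buckets = np.array_split(np.arange(len(filtered)), k)
        bucket_aggs = [robust_agg([filtered[i] for i in b]) for b in buckets]
        return np.mean(np.stack(bucket_aggs), axis=0)

    # Toy usage: 9 nodes, redundancy r = 3, one Byzantine node per group.
    true_grad = np.ones(4)
    grads = [true_grad.copy() for _ in range(9)]
    for bad in (0, 3, 6):
        grads[bad] = 100.0 * np.random.randn(4)   # corrupted gradients
    print(detox_aggregate(grads, r=3, k=3))        # recovers [1. 1. 1. 1.]

In this toy setting each group of three redundant gradients outvotes its single Byzantine member, so the corrupted copies never reach the robust aggregation step; the hierarchical second step then guards against the cases where a group's majority is itself compromised.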
