Poster in Workshop: 3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)
dSTAR: Straggler Tolerant and Byzantine Resilient Distributed SGD
Jiahe Yan · Pratik Chaudhari · Leonard Kleinrock
Keywords: [ Byzantine resilience ] [ SGD ] [ byzantine attack ] [ gradient aggregation rule ]
Abstract:
Distributed model training must contend with challenges such as the straggler effect and Byzantine attacks. When coordinating the training process across multiple computing nodes, ensuring timely and reliable gradient aggregation amid network and system malfunctions is essential. To tackle these issues, we propose dSTAR, a lightweight and efficient approach for distributed stochastic gradient descent (SGD) that enhances robustness and convergence. dSTAR selectively aggregates gradients by collecting updates from the first k workers to respond, filtering them based on deviations calculated using an ensemble median. This method not only mitigates the impact of stragglers but also fortifies the model against Byzantine adversaries. We theoretically establish that dSTAR is (α, f)-Byzantine resilient and achieves a linear convergence rate. Empirical evaluations across various scenarios demonstrate that dSTAR consistently maintains high accuracy, outperforming other Byzantine-resilient methods that often suffer accuracy drops of up to 40-50% under attack. Our results highlight dSTAR as a robust solution for training models in distributed environments prone to both straggler delays and Byzantine faults.
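The abstract describes the aggregation rule only at a high level: take the first k updates to arrive, compare each to an ensemble median, and drop the ones that deviate too much. Below is a minimal sketch of what such a rule could look like, assuming NumPy arrays, a caller-supplied bound f on the number of Byzantine workers, and a keep-the-closest filtering criterion; the function name dstar_aggregate and the exact filtering threshold are illustrative assumptions, not the authors' implementation.

import numpy as np

def dstar_aggregate(gradients, k, f):
    """Sketch of a straggler-tolerant, median-filtered aggregation step.

    gradients: list of np.ndarray worker updates, ordered by arrival time.
    k: number of fastest workers whose updates are used (tolerates stragglers).
    f: assumed upper bound on Byzantine workers among the first k.
    """
    first_k = np.stack(gradients[:k])                 # updates from the k fastest responders
    median = np.median(first_k, axis=0)               # coordinate-wise ensemble median
    deviations = np.linalg.norm(first_k - median, axis=1)  # distance of each update from the median
    # Assumed filtering rule: keep the k - f updates closest to the median,
    # discarding the ones most likely to be Byzantine, then average the rest.
    keep = np.argsort(deviations)[: k - f]
    return first_k[keep].mean(axis=0)

In this sketch the server never waits for slow workers beyond the first k responses, and the median-based deviation filter is what provides the Byzantine filtering; the paper's analysis of (α, f)-Byzantine resilience applies to its own rule rather than this simplified version.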