

Poster
in
Workshop: Federated Learning: Recent Advances and New Challenges

Refined Convergence and Topology Learning for Decentralized Optimization with Heterogeneous Data

Batiste Le Bars · Aurélien Bellet · Marc Tommasi · Erick Lavoie · Anne-Marie Kermarrec


Abstract:

One of the key challenges in decentralized and federated learning is to design algorithms that efficiently deal with highly heterogeneous data distributions across agents. In this paper, we revisit the analysis of the Decentralized Stochastic Gradient Descent (D-SGD) algorithm under data heterogeneity. We first exhibit the key role played by a new quantity, called neighborhood heterogeneity, in the convergence rate of D-SGD. Neighborhood heterogeneity provides a natural criterion for learning data-dependent, sparse topologies that reduce the detrimental effect of data heterogeneity on the convergence of D-SGD. For the important case of classification with label skew, we formulate topology learning as a tractable optimization problem that we solve with a Frank-Wolfe algorithm. As illustrated by a set of experiments, the learned sparse topology is shown to balance the convergence speed and the per-iteration communication costs of D-SGD.
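To make the setting concrete, the following is a minimal sketch of a D-SGD iteration: each agent takes a local gradient step and then averages its parameters with its neighbors through a mixing matrix `W`. Everything here (the toy quadratic losses, the ring topology, the step size) is an illustrative assumption, not the paper's setup; the paper's contribution is precisely to *learn* `W` rather than fix it in advance.

```python
import numpy as np

# Hypothetical toy problem: each agent i holds a local quadratic loss
# f_i(x) = 0.5 * (x - b_i)^2, so the minimizer of the average loss is
# the mean of the b_i. Heterogeneous b_i mimic heterogeneous data.
n = 4
rng = np.random.default_rng(0)
b = rng.normal(size=n)  # heterogeneous local targets

# Fixed ring topology with a doubly stochastic mixing matrix W:
# each agent averages with its two neighbors. (The paper learns a
# data-dependent sparse W instead of fixing one like this.)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)  # one scalar parameter per agent
lr = 0.001
for _ in range(20000):
    grads = x - b            # local gradients of the f_i
    x = W @ (x - lr * grads) # local SGD step, then gossip averaging

# With a small constant step size, all agents end up close to the
# global minimizer (the mean of b), up to a residual that grows with
# the step size and the heterogeneity of the b_i.
print(np.abs(x - b.mean()).max())
```

With a constant step size, D-SGD converges only to a neighborhood of the optimum whose size depends on data heterogeneity; this is the effect that a well-chosen (or learned) topology can mitigate.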
