

Poster in Workshop: OPT 2022: Optimization for Machine Learning

Decentralized Stochastic Optimization with Client Sampling

Ziwei Liu · Anastasiia Koloskova · Martin Jaggi · Tao Lin


Abstract: Decentralized optimization is a key setting toward enabling data privacy and on-device learning over networks. Existing research primarily focuses on distributing the objective function across $n$ nodes/clients, lagging behind real-world challenges such as i) node availability---not all $n$ nodes are always available during the optimization---and ii) slow information propagation (caused by a large number of nodes $n$). In this work, we study Decentralized Stochastic Gradient Descent (D-SGD) with node subsampling, i.e., when only $s~(s \leq n)$ nodes are randomly sampled out of the $n$ nodes per iteration. We provide theoretical convergence rates for smooth (convex and non-convex) problems with heterogeneous (non-identically distributed) functions. Our theoretical results capture the effect of node subsampling and of the choice of topology on the sampled nodes through a metric termed \emph{the expected consensus rate}. On a number of common topologies, including the ring and the torus, we theoretically and empirically demonstrate the effectiveness of this metric.
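
The following is a minimal sketch, not the authors' implementation, of the setting the abstract describes: D-SGD where only $s$ of $n$ nodes are sampled per iteration, each sampled node takes a local SGD step, and the sampled nodes then gossip-average over a ring topology. The quadratic local losses, the specific mixing weights, the step size, and all variable names are illustrative assumptions.

```python
# Minimal D-SGD with client sampling (illustrative sketch; quadratic losses are assumed).
import numpy as np

rng = np.random.default_rng(0)
n, s, d, T, lr = 16, 4, 5, 200, 0.1           # n nodes, s sampled per iteration, dimension d
b = rng.normal(size=(n, d))                   # hypothetical heterogeneous local optima
x = np.zeros((n, d))                          # one model copy per node

def ring_mixing(num_sampled):
    """Doubly stochastic mixing matrix for a ring over the sampled nodes."""
    W = np.zeros((num_sampled, num_sampled))
    for i in range(num_sampled):
        W[i, i] = 1 / 3
        W[i, (i - 1) % num_sampled] = 1 / 3
        W[i, (i + 1) % num_sampled] = 1 / 3
    return W

for t in range(T):
    sampled = rng.choice(n, size=s, replace=False)    # client sampling: s out of n nodes
    grads = x[sampled] - b[sampled]                   # gradients of f_i(x) = 0.5 * ||x - b_i||^2
    local = x[sampled] - lr * grads                   # local SGD step on sampled nodes only
    W = ring_mixing(s)
    x[sampled] = W @ local                            # gossip averaging on the sampled ring
    # non-sampled nodes keep their current models

print("distance to average optimum:", np.linalg.norm(x.mean(axis=0) - b.mean(axis=0)))
```

In this sketch, how quickly the sampled-node averaging contracts disagreement between models plays the role of the expected consensus rate discussed in the abstract; varying $s$ or the mixing matrix changes that contraction and hence the convergence speed.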
