

Poster in Workshop: Federated Learning: Recent Advances and New Challenges

Stochastic Gradient Methods with Compressed Communication for Decentralized Saddle Point Problems

Chhavi Sharma · Vishnu Narayanan · Balamurugan Palaniappan


Abstract: We develop two compression-based stochastic gradient algorithms to solve a class of non-smooth strongly convex-strongly concave saddle-point problems in a decentralized setting (without a central server). Our first algorithm is a Restart-based Decentralized Proximal Stochastic Gradient method with Compression (C-RDPSG) for general stochastic settings. We provide rigorous theoretical guarantees for C-RDPSG, with gradient computation complexity and communication complexity of order $\mathcal{O}\left((1+\delta)^4 \frac{1}{L^2} \kappa_f^2 \kappa_g^2 \frac{1}{\epsilon}\right)$ to achieve an $\epsilon$-accurate saddle-point solution, where $\delta$ denotes the compression factor, $\kappa_f$ and $\kappa_g$ denote respectively the condition numbers of the objective function and the communication graph, and $L$ denotes the smoothness parameter of the smooth part of the objective function. Next, we present a Decentralized Proximal Stochastic Variance Reduced Gradient algorithm with Compression (C-DPSVRG) for the finite-sum setting, which exhibits gradient computation complexity and communication complexity of order $\mathcal{O}\left((1+\delta) \max\{\kappa_f^2, \sqrt{\delta}\,\kappa_f^2 \kappa_g, \kappa_g\} \log\left(\frac{1}{\epsilon}\right)\right)$. Extensive numerical experiments show the competitive performance of the proposed algorithms and support the theoretical results obtained.
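To make the roles of the compression factor $\delta$ and the gossip-based communication concrete, the sketch below shows a generic compressed, decentralized stochastic gradient-descent-ascent round on a toy strongly convex-strongly concave objective. This is only a minimal illustration under stated assumptions, not the authors' C-RDPSG or C-DPSVRG: the top-k compressor, the quadratic toy objective, the mixing matrix, and all function names are assumptions made for the example.

```python
# Illustrative sketch (NOT the paper's C-RDPSG/C-DPSVRG): a contractive top-k compressor
# and one compressed, gossip-based stochastic gradient descent-ascent round for a
# decentralized saddle-point problem min_x max_y f(x, y). Toy objective and all names
# are assumptions for illustration only.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries; a standard contractive compressor."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def local_grads(x, y, A, b, c, rng):
    """Noisy gradients of a toy strongly convex-strongly concave local objective
    f_i(x, y) = 0.5||x - b||^2 + y^T A x - 0.5||y - c||^2 (noise mimics sampling)."""
    gx = (x - b) + A.T @ y + 0.01 * rng.standard_normal(x.shape)
    gy = A @ x - (y - c) + 0.01 * rng.standard_normal(y.shape)
    return gx, gy

def compressed_round(X, Y, W, data, rng, lr=0.05, k=2):
    """One synchronized round: each node takes a stochastic descent step in x and an
    ascent step in y, compresses its iterate, and the compressed messages are mixed
    with neighbors through the doubly stochastic gossip matrix W."""
    X_new, Y_new = X.copy(), Y.copy()
    for i in range(X.shape[0]):
        gx, gy = local_grads(X[i], Y[i], *data[i], rng)
        X_new[i] = X[i] - lr * gx   # descent in the primal variable
        Y_new[i] = Y[i] + lr * gy   # ascent in the dual variable
    CX = np.stack([top_k(x, k) for x in X_new])   # compressed communication
    CY = np.stack([top_k(y, k) for y in Y_new])
    return W @ CX, W @ CY                          # gossip averaging

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 5                                    # 4 nodes, 5-dimensional variables
    W = np.full((n, n), 1.0 / n)                   # mixing matrix of a complete graph
    data = [(0.1 * rng.standard_normal((d, d)),
             rng.standard_normal(d),
             rng.standard_normal(d)) for _ in range(n)]
    X, Y = rng.standard_normal((n, d)), rng.standard_normal((n, d))
    for _ in range(200):
        X, Y = compressed_round(X, Y, W, data, rng)
    print("disagreement across nodes:", np.linalg.norm(X - X.mean(0)))
```

In this hypothetical setup, more aggressive compression (smaller k in the top-k operator) corresponds to a larger compression factor $\delta$, which in the paper's bounds inflates the gradient and communication complexities through the $(1+\delta)$ terms.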
