
A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization
Sulaiman Alghunaim · Kun Yuan · Ali H Sayed

Tue Dec 10 05:30 PM -- 07:30 PM (PST) @ East Exhibition Hall B + C #24

Decentralized optimization is a powerful paradigm that finds applications in engineering and learning design. This work studies decentralized composite optimization problems with non-smooth regularization terms. Most existing gradient-based proximal decentralized methods are known to converge to the optimal solution only at sublinear rates, and it remains unclear whether this family of methods can achieve global linear convergence. To tackle this problem, this work assumes the non-smooth regularization term is common across all networked agents, which is the case for many machine learning problems. Under this condition, we design a proximal gradient decentralized algorithm whose fixed point coincides with the desired minimizer. We then provide a concise proof that establishes its linear convergence. In the absence of the non-smooth term, our analysis technique covers the well-known EXTRA algorithm and provides useful bounds on the convergence rate and step-size.
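To illustrate the setting the abstract describes, here is a minimal sketch of a decentralized proximal gradient iteration on a toy problem: each agent takes a local gradient step, averages with its neighbors through a doubly-stochastic mixing matrix, then applies the proximal operator of a common non-smooth regularizer (an ℓ1 term, whose prox is soft-thresholding). This is a generic adapt-then-combine diffusion scheme with the problem data, network, and step-size all invented for illustration; it is not the paper's exact algorithm and, unlike the proposed method, converges only to a neighborhood of the minimizer.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 -- the common non-smooth term
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy problem: agent k holds f_k(w) = 0.5 * ||A_k w - b_k||^2;
# the network minimizes sum_k f_k(w) + lam * ||w||_1, with the
# regularizer shared by all agents (the paper's key assumption).
rng = np.random.default_rng(0)
N, d = 4, 3                      # number of agents, dimension
A = [rng.standard_normal((5, d)) for _ in range(N)]
b = [rng.standard_normal(5) for _ in range(N)]
lam, mu = 0.1, 0.02              # regularizer weight, step-size (illustrative)

# Doubly-stochastic mixing matrix for a 4-agent ring network
C = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

W = np.zeros((N, d))             # row k is agent k's iterate
for _ in range(2000):
    # Adapt: each agent takes a local gradient step
    psi = np.array([W[k] - mu * A[k].T @ (A[k] @ W[k] - b[k]) for k in range(N)])
    # Combine: average with neighbors via the mixing matrix
    W = C @ psi
    # Prox: every agent applies the same soft-thresholding step
    W = soft_threshold(W, mu * lam)

print(np.round(W, 3))            # rows should be approximately equal (near-consensus)
```

Swapping the combine step and the fixed-point correction the paper designs is what upgrades this sublinear-family scheme to one with a linearly convergent fixed point at the exact minimizer.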

Author Information

Sulaiman Alghunaim (UCLA)
Kun Yuan (Alibaba Inc.)
Ali H Sayed (Ecole Polytechnique Fédérale de Lausanne)

A. H. Sayed is Dean of Engineering at EPFL, Switzerland, and principal investigator of the Adaptive Systems Laboratory. He has served as distinguished professor and chairman of electrical engineering at UCLA. He has authored or co-authored over 530 scholarly publications and six books; his research spans several areas including adaptation and learning theories, data and network sciences, statistical inference, and distributed optimization. He is recognized as a Highly Cited Researcher by Thomson Reuters and Clarivate Analytics, and is a member of the US National Academy of Engineering. He is serving as President of the IEEE Signal Processing Society.