Workshop: Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23)

LASER: Linear Compression in Wireless Distributed Optimization

Ashok Vardhan Makkuva · Marco Bondaschi · Thijs Vogels · Martin Jaggi · Hyeji Kim · Michael Gastpar

Keywords: [ gradient compression ] [ wireless communication ] [ federated learning ] [ Distributed Optimization ] [ Deep Learning ]

Sat 16 Dec 2:05 p.m. PST — 2:15 p.m. PST

Abstract: Data-parallel SGD is the de facto algorithm for distributed optimization, especially for large-scale machine learning. Despite its merits, the communication bottleneck remains one of its persistent issues. Most compression schemes designed to alleviate it either assume noiseless communication links or fail to achieve good performance on practical tasks. In this paper, we close this gap and introduce $\bf{LASER}$: ${\bf L}$ine${\bf A}$r Compre${\bf S}$sion in Wir${\bf E}$less Dist${\bf R}$ibuted Optimization. LASER capitalizes on the inherent low-rank structure of gradients and transmits them efficiently over noisy channels. While enjoying theoretical guarantees similar to those of classical SGD, LASER shows consistent gains over baselines on a variety of practical benchmarks. In particular, it outperforms state-of-the-art compression schemes on challenging computer vision and GPT language modeling tasks. On the latter, we obtain a $50$–$64\%$ improvement in perplexity over our baselines for noisy channels.
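The abstract's central idea is that gradient matrices are approximately low rank, so each worker can transmit a small pair of factors instead of the full gradient. The following is a minimal sketch of that idea using truncated SVD; the factorization method, rank schedule, and wireless-channel mapping used by LASER itself are specified in the paper, not here, and the function names are illustrative only.

```python
import numpy as np

def low_rank_compress(grad, rank=2):
    """Rank-r approximation of a gradient matrix via truncated SVD.

    Illustrative sketch only: instead of sending an (m x n) gradient,
    a worker sends an (m x r) and an (r x n) factor, shrinking the
    payload from m*n to r*(m + n) values.
    """
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank] * s[:rank]   # left factor, shape (m, r)
    Q = Vt[:rank]                # right factor, shape (r, n)
    return P, Q

def decompress(P, Q):
    """Reconstruct the rank-r approximation of the gradient."""
    return P @ Q

# A rank-1 gradient is recovered exactly at rank >= 1.
G = np.outer(np.arange(4.0), np.arange(5.0))
P, Q = low_rank_compress(G, rank=1)
assert np.allclose(decompress(P, Q), G)
```

For a 4x5 matrix at rank 1, the transmitted payload drops from 20 values to 9; the savings grow with layer size, which is why low-rank schemes are attractive for the communication bottleneck the abstract describes.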
