

Poster in Workshop: OPT 2021: Optimization for Machine Learning

DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning

Robert Hönig · Yiren Zhao · Robert Mullins


Abstract:

Federated Learning (FL) is a powerful technique for training a model on a central server using data that remains distributed across several clients, in a privacy-preserving manner. FL incurs significant communication costs because the model is repeatedly transmitted between the server and the clients. Recently proposed algorithms quantize the model parameters to compress FL communication efficiently. We find that dynamically adapting the quantization level can boost compression without sacrificing model quality. We introduce DAdaQuant, a doubly-adaptive quantization algorithm that dynamically changes the quantization level across time and across clients. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.
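
To make the doubly-adaptive idea concrete, here is a minimal Python sketch of stochastic fixed-point quantization of client updates with a quantization level that varies over rounds (time adaptation) and per client (client adaptation). The schedule in `quantization_level` and the weight-based per-client rule are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def stochastic_quantize(update, q):
    """Unbiased stochastic fixed-point quantization of an update to q levels.

    Each coordinate is scaled into [0, q], then rounded up or down at random
    so the quantized value equals the original in expectation.
    """
    norm = np.max(np.abs(update))
    if norm == 0.0:
        return update
    scaled = np.abs(update) / norm * q           # values in [0, q]
    lower = np.floor(scaled)
    prob_up = scaled - lower                     # probability of rounding up
    rounded = lower + (np.random.rand(*update.shape) < prob_up)
    return np.sign(update) * rounded * norm / q  # dequantized estimate

def quantization_level(round_idx, q_init=2, double_every=50):
    """Illustrative time-adaptive schedule: start coarse, refine over rounds."""
    return q_init * 2 ** (round_idx // double_every)

# One communication round with per-client levels (client-adaptive part).
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=1000) for _ in range(4)]
client_weights = np.array([0.4, 0.3, 0.2, 0.1])  # e.g. proportional to data size

round_idx = 120
q_base = quantization_level(round_idx)
# Illustrative client adaptation: clients with larger aggregation weight
# quantize more finely, since their error contributes more to the average.
client_qs = [max(1, round(q_base * w * len(client_weights))) for w in client_weights]

compressed = [stochastic_quantize(u, q) for u, q in zip(client_updates, client_qs)]
aggregate = sum(w * c for w, c in zip(client_weights, compressed))
```

The sketch only covers the client→server direction, which is where the reported compression gains apply; the actual DAdaQuant adaptation rules are described in the paper.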
