
Shifted Compression Framework: Generalizations and Improvements
Egor Shulgin · Peter Richtárik

Communication is one of the key bottlenecks in the distributed training of large-scale machine learning models, and lossy compression of exchanged information, such as stochastic gradients or models, is one of the most effective instruments to alleviate this issue. Among the most studied compression techniques is the class of unbiased compression operators with variance bounded by a multiple of the squared norm of the vector we wish to compress. By design, this variance may remain high, and diminishes only if the input vector approaches zero. However, unless the model being trained is overparameterized, there is no a priori reason for the vectors we wish to compress to approach zero during the iterations of classical methods such as distributed compressed SGD, which has adverse effects on the convergence speed. To circumvent this issue, several more elaborate and seemingly very different algorithms have been proposed recently. These methods are based on the idea of compressing the difference between the vector we would normally wish to compress and some auxiliary vector which changes throughout the iterative process. In this work we take a step back and develop a unified framework for studying such methods, both conceptually and theoretically. Our framework incorporates methods that compress both gradients and models, using unbiased and biased compressors, and sheds light on the construction of the auxiliary vectors. Furthermore, our general framework can lead to improvements of several existing algorithms and can produce new ones. Finally, we perform several numerical experiments which illustrate and support our theoretical findings.
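The two ingredients described in the abstract can be sketched in a few lines of Python. The sketch below uses a standard unbiased Rand-K sparsifier as the compression operator, and a simple shift update with an illustrative step size `alpha`; this update rule is an assumption in the spirit of DIANA-type methods, not the paper's exact algorithm.

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased Rand-K sparsification: keep k random coordinates of x
    and rescale by d/k so that E[C(x)] = x. Its variance is bounded by
    a multiple of ||x||^2, as described in the abstract."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def shifted_compress(x, h, k, rng, alpha=0.5):
    """Shifted compression: instead of compressing x directly, compress
    the difference x - h against an auxiliary shift h that changes over
    the iterations. The alpha-step update of h is illustrative only."""
    delta = rand_k(x - h, k, rng)   # the compressed message actually sent
    x_hat = h + delta               # receiver's unbiased estimate of x
    h_new = h + alpha * delta       # shift update (hypothetical rule)
    return x_hat, h_new
```

The point of the shift is visible immediately: if `h` tracks `x` well, the compressed quantity `x - h` is small, so the compression variance vanishes even when `x` itself does not approach zero.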

Author Information

Egor Shulgin (Samsung AI Cambridge, King Abdullah University of Science and Technology)

I am a PhD student in Computer Science at King Abdullah University of Science and Technology (KAUST), advised by [Peter Richtárik](https://richtarik.org/). Prior to that, I obtained a BSc in Applied Mathematics, Computer Science, and Physics from the Moscow Institute of Physics and Technology in 2019.

Peter Richtárik (KAUST)
