Poster
Scalable Belief Propagation via Relaxed Scheduling
Vitalii Aksenov · Dan Alistarh · Janne H. Korhonen
The ability to leverage large-scale hardware parallelism has been one of the key enablers of the recent accelerated progress in machine learning. Consequently, considerable effort has been invested in developing efficient parallel variants of classic machine learning algorithms. However, despite this wealth of knowledge on parallelization, some classic machine learning algorithms remain hard to parallelize efficiently while maintaining convergence.
In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, and in particular on the fundamental belief propagation algorithm. We address the challenge of efficiently parallelizing this classic paradigm by showing how to leverage scalable relaxed schedulers, which reduce parallelization overheads, in this setting. We investigate the overheads of relaxation analytically, and present an extensive empirical study showing that our approach outperforms previous parallel belief propagation implementations both in scalability and in wall-clock convergence time, on a range of practical applications.
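To make the scheduling idea concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation) of residual belief propagation on a toy chain-structured pairwise model. In exact residual scheduling, the message with the largest residual is always updated next; here that exact priority queue is replaced by a relaxed scheduler that picks uniformly among the top-k residuals, mimicking the weaker ordering guarantees of scalable relaxed priority schedulers. The relaxation model (uniform choice among the top k), the toy model, and all function names are assumptions made for illustration only.

import random
import numpy as np

def normalize(v):
    return v / v.sum()

def make_chain(n, states=2, seed=0):
    # Toy chain-structured pairwise model with random positive potentials.
    rng = np.random.default_rng(seed)
    unary = [normalize(rng.random(states) + 0.1) for _ in range(n)]
    pairwise = {(i, i + 1): rng.random((states, states)) + 0.1 for i in range(n - 1)}
    edges = [(i, i + 1) for i in range(n - 1)] + [(i + 1, i) for i in range(n - 1)]
    return unary, pairwise, edges

def compute_message(src, dst, unary, pairwise, messages):
    # Sum-product message m_{src->dst}(x_dst):
    # sum over x_src of unary(x_src) * pairwise(x_src, x_dst) * incoming messages to src (excluding the one from dst).
    incoming = unary[src].copy()
    for (a, b), m in messages.items():
        if b == src and a != dst:
            incoming = incoming * m
    if (src, dst) in pairwise:
        pot = pairwise[(src, dst)]       # indexed [x_src, x_dst]
    else:
        pot = pairwise[(dst, src)].T     # transpose so it is indexed [x_src, x_dst]
    return normalize(pot.T @ incoming)

def relaxed_residual_bp(n=50, k=4, max_updates=20000, tol=1e-8):
    unary, pairwise, edges = make_chain(n)
    states = len(unary[0])
    messages = {e: np.full(states, 1.0 / states) for e in edges}
    # Residual of a directed edge = how much its message would change if recomputed now.
    residual = {e: float("inf") for e in edges}
    updates = 0
    while updates < max_updates and max(residual.values()) > tol:
        # Relaxed scheduling step: instead of popping the exact maximum-residual
        # message (k = 1 recovers exact residual BP), choose uniformly among the
        # k largest residuals, mimicking a relaxed priority scheduler.
        top = sorted(residual, key=residual.get, reverse=True)[:k]
        src, dst = random.choice(top)
        messages[(src, dst)] = compute_message(src, dst, unary, pairwise, messages)
        residual[(src, dst)] = 0.0
        # Updating m_{src->dst} invalidates the residuals of messages leaving dst.
        for (a, b) in edges:
            if a == dst and b != src:
                proposed = compute_message(a, b, unary, pairwise, messages)
                residual[(a, b)] = float(np.abs(proposed - messages[(a, b)]).max())
        updates += 1
    return messages, updates

if __name__ == "__main__":
    _, updates = relaxed_residual_bp()
    print("stopped after", updates, "message updates")

In a real parallel implementation the residual dictionary and selection loop would be replaced by a concurrent relaxed scheduler shared by many threads; the point of the sketch is only that updates applied in a slightly out-of-priority order still drive all residuals toward zero.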
Author Information
Vitalii Aksenov (ITMO University)
Dan Alistarh (IST Austria & Neural Magic Inc.)
Janne H. Korhonen (IST Austria)
More from the Same Authors
- 2023 Poster: Knowledge Distillation Performs Partial Variance Reduction
  Mher Safaryan · Alexandra Peste · Dan Alistarh
- 2023 Poster: ZipLM: Inference-Aware Structured Pruning of Language Models
  Eldar Kurtić · Elias Frantar · Dan Alistarh
- 2023 Poster: CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models
  Denis Kuznedelev · Eldar Kurtić · Elias Frantar · Dan Alistarh
- 2021 Poster: Towards Tight Communication Lower Bounds for Distributed Optimisation
  Janne H. Korhonen · Dan Alistarh
- 2020 Poster: Adaptive Gradient Quantization for Data-Parallel SGD
  Fartash Faghri · Iman Tabrizian · Ilia Markov · Dan Alistarh · Daniel Roy · Ali Ramezani-Kebrya
- 2020 Poster: WoodFisher: Efficient Second-Order Approximation for Neural Network Compression
  Sidak Pal Singh · Dan Alistarh
- 2020 Expo Demonstration: Using Sparse Quantization for Efficient Inference on Deep Neural Networks
  Mark J Kurtz · Dan Alistarh · Saša Zelenović
- 2018 Poster: The Convergence of Sparsified Gradient Methods
  Dan Alistarh · Torsten Hoefler · Mikael Johansson · Nikola Konstantinov · Sarit Khirirat · Cedric Renggli
- 2018 Poster: Byzantine Stochastic Gradient Descent
  Dan Alistarh · Zeyuan Allen-Zhu · Jerry Li