Large neural network models are commonly trained through a combination of advanced parallelism strategies in a single program, multiple data (SPMD) paradigm. For example, training large transformer models requires combining data, model, and pipeline partitioning with optimizer sharding techniques. However, identifying efficient combinations for many model architectures and accelerator systems requires significant manual analysis. In this work, we present an automatic partitioner that identifies these combinations through a goal-oriented search. Our key finding is that a Monte Carlo Tree Search-based partitioner, which leverages partition-specific compiler analysis directly inside the search and is guided by user-specified goals, matches expert-level strategies for a variety of models.
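The search described in the abstract can be sketched with a toy Monte Carlo Tree Search over per-tensor shard/replicate decisions. Everything below is an illustrative stand-in: the tensor sizes, the `goal_score` cost model (which plays the role of the partition-specific compiler analysis feeding the search), and all function names are assumptions for the sketch, not the paper's actual implementation.

```python
import math
import random

# Hypothetical workload: sizes of five tensors we may shard or replicate.
TENSOR_SIZES = [1024, 8, 512, 4, 256]

def goal_score(decisions):
    """Toy goal: sharding a tensor saves log(size) memory but pays a
    fixed communication cost of 3.0; replicating costs nothing.
    Higher is better."""
    return sum((math.log(s) - 3.0) if d else 0.0
               for s, d in zip(TENSOR_SIZES, decisions))

class Node:
    def __init__(self, decisions, parent=None):
        self.decisions = decisions   # partial partitioning plan so far
        self.parent = parent
        self.children = {}           # action (0=replicate, 1=shard) -> Node
        self.visits = 0
        self.value = 0.0

    def is_terminal(self):
        return len(self.decisions) == len(TENSOR_SIZES)

    def uct_child(self, c=1.4):
        # Standard UCT: exploit mean value, explore rarely-visited children.
        return max(self.children.values(),
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def rollout(decisions):
    # Complete the partial plan with random decisions and score it.
    rest = [random.choice([0, 1])
            for _ in range(len(TENSOR_SIZES) - len(decisions))]
    return goal_score(decisions + rest)

def mcts(iterations=500, seed=0):
    random.seed(seed)
    root = Node([])
    for _ in range(iterations):
        node = root
        # 1) Selection: descend fully expanded nodes via UCT.
        while not node.is_terminal() and len(node.children) == 2:
            node = node.uct_child()
        # 2) Expansion: add one untried child.
        if not node.is_terminal():
            action = 0 if 0 not in node.children else 1
            node.children[action] = Node(node.decisions + [action], node)
            node = node.children[action]
        # 3) Simulation: random rollout to a complete plan.
        reward = rollout(node.decisions)
        # 4) Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Extract the most-visited plan as the final partitioning.
    plan, node = [], root
    while node.children:
        action, node = max(node.children.items(),
                           key=lambda kv: kv[1].visits)
        plan.append(action)
    return plan
```

Usage: `plan = mcts()` returns one shard/replicate decision per tensor. In the paper's setting the rollout reward would come from compiler analysis of the partitioned program rather than a closed-form toy cost.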
Author Information
Sami Alabed (University of Cambridge)
Dominik Grewe
Juliana Franco
Bart Chrzaszcz
Tom Natan
Tamara Norman
Norman Rink (DeepMind)
Dimitrios Vytiniotis (DeepMind)
Michael Schaarschmidt (Isomorphic Labs)
More from the Same Authors
- 2022 : Pre-training via Denoising for Molecular Property Prediction » Sheheryar Zaidi · Michael Schaarschmidt · James Martens · Hyunjik Kim · Yee Whye Teh · Alvaro Sanchez Gonzalez · Peter Battaglia · Razvan Pascanu · Jonathan Godwin
- 2019 : The Differentiable Curry » Dimitrios Vytiniotis