
AutoSync: Learning to Synchronize for Data-Parallel Distributed Deep Learning
Hao Zhang · Yuan Li · Zhijie Deng · Xiaodan Liang · Lawrence Carin · Eric Xing

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1037

Synchronization is a key step in data-parallel distributed machine learning (ML). Different synchronization systems and strategies perform differently, and achieving optimal parallel training throughput requires synchronization strategies that adapt to model structures and cluster configurations. Existing synchronization systems often consider only one or a few synchronization aspects, and the burden of deciding the right synchronization strategy is then placed on ML practitioners, who may lack the required expertise. In this paper, we develop a model- and resource-dependent representation for synchronization that unifies multiple synchronization aspects, from system architecture and message partitioning to placement scheme and communication topology. Based on this representation, we build an end-to-end pipeline, AutoSync, to automatically optimize synchronization strategies given model structures and resource specifications, lowering the bar for data-parallel distributed ML. By learning from low-shot data collected in only 200 trial runs, AutoSync can discover synchronization strategies up to 1.6x better than manually optimized ones. We develop transfer-learning mechanisms to further reduce the auto-optimization cost -- the simulators can transfer among similar model architectures, among similar cluster configurations, or both. We also present a dataset that contains over 10,000 synchronization strategy and runtime pairs on a diverse set of models and cluster specifications.
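The pipeline the abstract describes -- a learned cost simulator scoring candidate strategies drawn from a combinatorial space of synchronization aspects -- can be sketched as follows. This is a minimal illustrative sketch, not AutoSync's actual code; the aspect names, featurization, and random-search loop are assumptions for the example.

```python
import random

# A "strategy" assigns a choice to each synchronization aspect mentioned in
# the abstract (the option lists here are hypothetical placeholders).
ASPECTS = {
    "partition": ["none", "by_tensor", "by_chunk"],
    "placement": ["round_robin", "load_balanced"],
    "topology": ["ps", "all_reduce", "ring"],
}

def random_strategy(rng):
    """Sample one point from the combinatorial strategy space."""
    return {aspect: rng.choice(options) for aspect, options in ASPECTS.items()}

def featurize(strategy):
    """One-hot encode a strategy so a learned runtime simulator can score it."""
    feats = []
    for aspect, options in sorted(ASPECTS.items()):
        feats.extend(1.0 if strategy[aspect] == o else 0.0 for o in options)
    return feats

def search(simulator, n_candidates=200, seed=0):
    """Propose candidate strategies and keep the one the simulator predicts
    to be fastest; the candidate budget echoes the 200 trial runs in the
    abstract, though AutoSync's actual search is more sophisticated."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_candidates):
        s = random_strategy(rng)
        cost = simulator(featurize(s))
        if cost < best_cost:
            best, best_cost = s, cost
    return best, best_cost
```

In practice the simulator would be a regression model fit on the collected strategy-runtime pairs; here any callable mapping a feature vector to a predicted runtime can be plugged in.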

Author Information

Hao Zhang (Carnegie Mellon University, Petuum Inc.)
Yuan Li (Duke University)
Zhijie Deng (Tsinghua University)
Xiaodan Liang (Sun Yat-sen University)
Lawrence Carin (Duke University)
Eric Xing (Petuum Inc. / Carnegie Mellon University)
