

Poster

Ordered Momentum for Asynchronous SGD

Chang-Wei Shi · Yi-Rui Yang · Wu-Jun Li

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Distributed learning is indispensable for training large-scale deep models. Asynchronous SGD (ASGD) and its variants are commonly used distributed learning methods in scenarios where the computing capabilities of workers in the cluster are heterogeneous. Momentum has been acknowledged for its benefits in both optimization and generalization in deep model training. However, existing works have found that naively incorporating momentum into ASGD can impede convergence. In this paper, we propose a novel method, called ordered momentum (OrMo), for ASGD. In OrMo, momentum is incorporated into ASGD by organizing the gradients in order based on their iteration indexes. We theoretically prove the convergence of OrMo for non-convex problems. To the best of our knowledge, this is the first work to establish the convergence analysis of ASGD with momentum without relying on the bounded delay assumption. Empirical results demonstrate that OrMo can achieve better convergence performance compared with ASGD and other asynchronous methods with momentum.
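The abstract's core idea is that gradients are folded into the momentum term in the order of their iteration indexes rather than in their (possibly delayed) arrival order. Below is a minimal toy sketch of that ordering idea in a parameter-server style loop; it is not the paper's exact OrMo update rule, and the class name, the heavy-ball momentum form, and the hyperparameters (`lr`, `beta`) are all illustrative assumptions.

```python
import numpy as np

class OrderedMomentumServer:
    """Toy sketch: buffer asynchronously arriving gradients by iteration
    index and fold them into the momentum term strictly in index order.
    The actual OrMo update rule is defined in the paper; this only
    illustrates the ordering idea under a heavy-ball momentum assumption."""

    def __init__(self, params, lr=0.1, beta=0.9):
        self.params = params                    # model parameters (np.ndarray)
        self.velocity = np.zeros_like(params)   # momentum buffer
        self.lr = lr                            # learning rate (assumed)
        self.beta = beta                        # momentum coefficient (assumed)
        self.next_index = 0                     # next iteration index to apply
        self.buffer = {}                        # out-of-order gradients, keyed by index

    def receive(self, index, grad):
        """Called when a worker's gradient for iteration `index` arrives,
        possibly out of order due to heterogeneous worker speeds."""
        self.buffer[index] = grad
        # Apply every gradient that is now available, in iteration order,
        # so momentum accumulates over an ordered gradient sequence.
        while self.next_index in self.buffer:
            g = self.buffer.pop(self.next_index)
            self.velocity = self.beta * self.velocity + g
            self.params -= self.lr * self.velocity
            self.next_index += 1
```

In plain ASGD with naive momentum, each arriving gradient would update the velocity immediately, so delayed gradients perturb the momentum sequence; the sketch instead keys gradients by iteration index so the momentum recursion sees them in order, which is one plausible reading of the ordering mechanism the abstract describes.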
