Poster
Minibatch vs Local SGD for Heterogeneous Distributed Learning
Blake Woodworth · Kumar Kshitij Patel · Nati Srebro

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1140

We analyze Local SGD (aka parallel or federated SGD) and Minibatch SGD in the heterogeneous distributed setting, where each machine has access to stochastic gradient estimates for a different, machine-specific, convex objective; the goal is to optimize with respect to the average objective; and machines can only communicate intermittently. We argue that (i) Minibatch SGD (even without acceleration) dominates all existing analyses of Local SGD in this setting and (ii) accelerated Minibatch SGD is optimal when the heterogeneity is high; we also (iii) present the first upper bound for Local SGD that improves over Minibatch SGD in a non-homogeneous regime.
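To make the two algorithms being compared concrete, here is a minimal sketch (not the authors' code) of Local SGD versus Minibatch SGD on a toy heterogeneous problem: each machine m has its own quadratic objective F_m(x) = 0.5 ||x - c_m||^2, the target is the average objective, and machines communicate once per round after K gradient computations each. All names, step sizes, and problem parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 5          # number of machines, dimension (illustrative)
K, R = 10, 200       # local gradient steps per round, communication rounds
lr = 0.05            # step size (illustrative choice, not from the paper)
noise = 0.5          # std of stochastic gradient noise
centers = rng.normal(size=(M, d))  # machine-specific optima -> heterogeneity
x_star = centers.mean(axis=0)      # minimizer of the average objective

def stoch_grad(x, m):
    """Noisy gradient of machine m's objective F_m(x) = 0.5*||x - c_m||^2."""
    return (x - centers[m]) + noise * rng.normal(size=d)

def local_sgd():
    x = np.zeros(d)
    for _ in range(R):
        # each machine takes K local SGD steps from the current consensus point
        local_iterates = []
        for m in range(M):
            xm = x.copy()
            for _ in range(K):
                xm -= lr * stoch_grad(xm, m)
            local_iterates.append(xm)
        x = np.mean(local_iterates, axis=0)  # communicate: average local iterates
    return x

def minibatch_sgd():
    x = np.zeros(d)
    for _ in range(R):
        # each machine computes K gradients at the SAME point; average them all
        g = np.mean([stoch_grad(x, m) for m in range(M) for _ in range(K)], axis=0)
        x -= lr * K * g  # one large step per round, matching K local steps in scale
    return x

for name, fn in [("Local SGD", local_sgd), ("Minibatch SGD", minibatch_sgd)]:
    x = fn()
    print(f"{name:>14}: distance to optimum = {np.linalg.norm(x - x_star):.4f}")
```

Both methods use the same budget of K stochastic gradients per machine per communication round; they differ only in whether the gradients are taken along a local trajectory (Local SGD) or all at the shared iterate (Minibatch SGD), which is exactly the trade-off the abstract analyzes.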

Author Information

Blake Woodworth (TTIC)
Kumar Kshitij Patel (Toyota Technological Institute at Chicago)
Nati Srebro (TTI-Chicago)