

Poster

Collaborative Deep Learning in Fixed Topology Networks

Zhanhong Jiang · Aditya Balu · Chinmay Hegde · Soumik Sarkar

Pacific Ballroom #133

Keywords: [ Optimization for Deep Networks ] [ Non-Convex Optimization ] [ Convex Optimization ]


Abstract:

There has been significant recent interest in parallelizing deep learning algorithms to handle the enormous growth in data and model sizes. While most advances focus on model parallelization and on engaging multiple computing agents via a central parameter server, the combination of data parallelization and decentralized computation has not been explored sufficiently. In this context, this paper presents a new consensus-based distributed SGD (CDSGD) algorithm (and its momentum variant, CDMSGD) for collaborative deep learning over fixed topology networks that enables data parallelization as well as decentralized computation. Such a framework can be extremely useful for learning agents with access to only local/private data in a communication-constrained environment. We analyze the convergence properties of the proposed algorithm for strongly convex and nonconvex objective functions, with fixed and diminishing step sizes, using concepts of Lyapunov function construction. We demonstrate the efficacy of our algorithms in comparison with the baseline centralized SGD and the recently proposed federated averaging algorithm (which also enables data parallelism) on benchmark datasets such as MNIST, CIFAR-10 and CIFAR-100.
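The core idea of the consensus-based update can be illustrated with a minimal sketch (the toy objective, ring topology, mixing weights, and all variable names below are illustrative assumptions, not the authors' code): each agent first averages its neighbors' parameters through a row-stochastic mixing matrix defined by the fixed network topology, then takes a local gradient step on its own private data.

```python
import numpy as np

# Minimal sketch of a consensus-based distributed SGD step (CDSGD-style),
# assuming a toy quadratic objective per agent; illustrative only.
rng = np.random.default_rng(0)

n_agents, dim = 4, 10
# Row-stochastic mixing matrix for an assumed fixed ring topology.
Pi = np.zeros((n_agents, n_agents))
for j in range(n_agents):
    Pi[j, j] = 0.5
    Pi[j, (j - 1) % n_agents] = 0.25
    Pi[j, (j + 1) % n_agents] = 0.25

# Each agent j holds private data; here a toy least-squares target t_j.
targets = rng.normal(size=(n_agents, dim))
x = rng.normal(size=(n_agents, dim))   # local parameter copies, one row per agent
alpha = 0.1                            # fixed step size

def local_grad(j, xj):
    """Gradient of the toy local objective f_j(x) = 0.5 * ||x - t_j||^2."""
    return xj - targets[j]

for k in range(200):
    # Consensus step: mix neighbors' parameters via the topology matrix.
    mixed = Pi @ x
    # Local (stochastic) gradient step on each agent's own objective.
    grads = np.stack([local_grad(j, x[j]) for j in range(n_agents)])
    x = mixed - alpha * grads

# Agents approach consensus near the minimizer of the average local objective.
print(np.max(np.abs(x - x.mean(axis=0))))
```

In a deep learning setting, the toy gradient would be replaced by a minibatch gradient of each agent's local loss, and the momentum variant (CDMSGD) would add a momentum term to the local update while keeping the same consensus (mixing) step.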
