We present and study a distributed optimization algorithm based on stochastic dual coordinate ascent. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often outperform stochastic gradient descent methods for regularized loss minimization problems, yet little effort has been devoted to studying them in a distributed framework. We make progress along this line by presenting a distributed stochastic dual coordinate ascent algorithm for a star network, together with an analysis of the tradeoff between computation and communication. We verify our analysis with experiments on real data sets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods and distributed alternating direction methods of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performance.
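To make the computation/communication tradeoff concrete, the following is a minimal Python sketch of a star-network SDCA loop for an L2-regularized hinge-loss SVM. It is an illustrative simulation under our own assumptions, not the paper's exact algorithm: the function name `distributed_sdca_svm`, the hyperparameters `K` (workers), `m` (local updates per round), and the sum-of-increments aggregation rule are all ours; only the coordinate-wise closed-form hinge-loss update is the standard SDCA step.

```python
import numpy as np

def distributed_sdca_svm(X, y, lam=0.01, K=4, m=100, rounds=20, seed=0):
    """Toy star-network simulation of distributed SDCA for an SVM.

    Primal problem:
        min_w (1/n) * sum_i max(0, 1 - y_i <w, x_i>) + (lam/2) * ||w||^2

    Each of K workers holds a shard of (X, y) and that shard's dual
    variables. Per round, every worker performs m stochastic dual
    coordinate updates against the last synchronized w, then the center
    sums the workers' primal increments. Larger m means more local
    computation per communication round.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    shards = np.array_split(rng.permutation(n), K)  # disjoint data shards
    alpha = np.zeros(n)   # dual variables; w = (1/(lam*n)) * sum_i alpha_i * x_i
    w = np.zeros(d)       # synchronized primal iterate
    for _ in range(rounds):
        total_delta_w = np.zeros(d)
        for shard in shards:              # "workers", simulated sequentially
            w_local = w.copy()            # stale copy from last synchronization
            for i in rng.choice(shard, size=m):
                xi, yi = X[i], y[i]
                norm_sq = xi @ xi
                if norm_sq == 0.0:
                    continue
                # Closed-form coordinate-wise dual maximizer for the hinge loss.
                grad = 1.0 - yi * (w_local @ xi)
                delta = yi * np.clip(lam * n * grad / norm_sq + alpha[i] * yi,
                                     0.0, 1.0) - alpha[i]
                alpha[i] += delta
                dw = (delta / (lam * n)) * xi
                w_local += dw             # worker's local primal view
                total_delta_w += dw       # increment shipped to the center
        w += total_delta_w                # one communication round
    return w
```

Increasing `m` performs more local dual updates per synchronization, trading extra computation for fewer communication rounds, which is the tradeoff the abstract analyzes. Summing all workers' increments is one simple aggregation choice and not necessarily the variant analyzed in the paper.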
Author Information
Tianbao Yang (NEC Labs America)
More from the Same Authors
- 2012 Poster: Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison
  Tianbao Yang · Yu-Feng Li · Mehrdad Mahdavi · Rong Jin · Zhi-Hua Zhou
- 2012 Poster: Stochastic Gradient Descent with Only One Projection
  Mehrdad Mahdavi · Tianbao Yang · Rong Jin · Shenghuo Zhu