Poster
Dual Averaging Method for Regularized Stochastic Learning and Online Optimization
Lin Xiao

Mon Dec 07 07:00 PM -- 11:59 PM (PST)

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the L1-norm for sparsity. We develop a new online algorithm, the regularized dual averaging method, that can explicitly exploit the regularization structure in an online setting. In particular, at each iteration, the learning variables are adjusted by solving a simple optimization problem that involves the running average of all past subgradients of the loss functions and the whole regularization term, not just its subgradient. This method achieves the optimal convergence rate and often enjoys a low per-iteration complexity similar to that of the standard stochastic gradient method. Computational experiments are presented for the special case of sparse online learning using L1-regularization.
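For the L1-regularized case the per-iteration subproblem described above has a closed-form, soft-threshold solution, which is what keeps the cost comparable to plain stochastic gradient descent. The sketch below illustrates one such update on the running-average subgradient; the parameter names `lam` (L1 weight) and `gamma` (scaling of the auxiliary strongly convex term) are illustrative choices, not notation from the paper, and the exact update rule in the paper may differ in details.

```python
import numpy as np

def l1_rda_step(gbar, t, lam, gamma):
    """One illustrative L1-RDA-style update.

    gbar  : running average of all past subgradients (numpy array)
    t     : iteration count (t >= 1)
    lam   : L1 regularization weight (assumed name)
    gamma : scaling of the auxiliary quadratic term (assumed name)

    Entries of gbar with magnitude at most lam are truncated to
    exactly zero, which is how the method produces sparse iterates;
    the remaining entries are shrunk by lam and rescaled.
    """
    shrink = np.maximum(np.abs(gbar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * np.sign(gbar) * shrink

# Minimal usage: a running-average subgradient with one small entry.
gbar = np.array([0.5, -0.05, 0.0])
w = l1_rda_step(gbar, t=4, lam=0.1, gamma=1.0)
# The 0.05 and 0.0 entries fall below the threshold and become exactly 0.
```

Note that the truncation acts on the averaged subgradient rather than on the current iterate, which is the key structural difference from subgradient methods that merely add the regularizer's subgradient to the step.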

Author Information

Lin Xiao (Microsoft Research)
