Stochastic composite mirror descent (SCMD) is a simple and efficient method that captures both the geometric and composite structures of optimization problems in machine learning. Existing strategies require taking either an average or a random selection of iterates to achieve optimal convergence rates, which, however, can either destroy the sparsity of solutions or slow down practical training. In this paper, we propose a theoretically sound strategy to select an individual iterate of vanilla SCMD, which achieves optimal rates for both convex and strongly convex problems in a non-smooth learning setting. Outputting an individual iterate preserves the sparsity of solutions, which is crucial for proper interpretation in sparse learning problems. We report experimental comparisons with several baseline methods to show the effectiveness of our method in achieving fast training as well as in outputting sparse solutions.
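The abstract does not spell out the algorithm or the paper's selection rule, so as a rough illustration only, here is a minimal sketch of vanilla SCMD specialized to the Euclidean mirror map (where it reduces to proximal SGD) on an l1-regularized least-squares objective. The function names, synthetic data, and O(1/sqrt(t)) step size are assumptions for illustration; returning the final iterate merely mimics the "individual iterate" idea and is not the paper's strategy.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (closed form, keeps exact zeros)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def scmd_l1(X, y, lam=0.1, n_iters=1000, seed=0):
    """Vanilla SCMD with the Euclidean mirror map (= proximal SGD) on the
    composite objective (1/2)(x_i^T w - y_i)^2 + lam * ||w||_1.
    Returns the last individual iterate, which stays sparse (illustrative only;
    the paper's iterate-selection strategy is different and not reproduced here)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)                  # sample one training example
        grad = (X[i] @ w - y[i]) * X[i]      # stochastic gradient of the smooth part
        eta = 1.0 / np.sqrt(t)               # assumed O(1/sqrt(t)) step size (convex case)
        w = soft_threshold(w - eta * grad, eta * lam)  # prox step handles the l1 term exactly
    return w

# Illustrative usage on synthetic sparse-regression data
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = scmd_l1(X, y)
print("nonzeros in solution:", np.count_nonzero(w_hat))
```

Note the contrast the abstract draws: averaging the iterates of such a scheme would mix many dense intermediate points and destroy the exact zeros that the proximal step produces, which is why outputting a single iterate matters for sparse learning.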
Author Information
Yunwen Lei (Technical University of Kaiserslautern)
Peng Yang (Southern University of Science and Technology)
Ke Tang (Southern University of Science and Technology)
Ding-Xuan Zhou (City University of Hong Kong)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Optimal Stochastic and Online Learning with Individual Iterates
  Thu. Dec 12th, 01:00 -- 03:00 AM, East Exhibition Hall B + C #164
More from the Same Authors
- 2023 Poster: Toward Better PAC-Bayes Bounds for Uniformly Stable Algorithms
  Sijia Zhou · Yunwen Lei · Ata Kaban
- 2022 Spotlight: A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
  Mingrui Liu · Zhenxun Zhuang · Yunwen Lei · Chunyang Liao
- 2022 Poster: A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
  Mingrui Liu · Zhenxun Zhuang · Yunwen Lei · Chunyang Liao
- 2022 Poster: Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
  Yunwen Lei · Rong Jin · Yiming Ying
- 2022 Poster: Stability and Generalization for Markov Chain Stochastic Gradient Methods
  Puyu Wang · Yunwen Lei · Yiming Ying · Ding-Xuan Zhou
- 2018 Poster: Stochastic Composite Mirror Descent: Optimal Bounds with High Probabilities
  Yunwen Lei · Ke Tang
- 2017 Poster: Log-normality and Skewness of Estimated State/Action Values in Reinforcement Learning
  Liangpeng Zhang · Ke Tang · Xin Yao
- 2017 Poster: Subset Selection under Noise
  Chao Qian · Jing-Cheng Shi · Yang Yu · Ke Tang · Zhi-Hua Zhou
- 2015 Poster: Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms
  Yunwen Lei · Urun Dogan · Alexander Binder · Marius Kloft