In this paper, we propose a practical online method for solving a class of distributionally robust optimization (DRO) problems with non-convex objectives, which has important applications in machine learning for improving the robustness of neural networks. In the literature, most methods for solving DRO are based on stochastic primal-dual methods. However, primal-dual methods for DRO suffer from several drawbacks: (1) manipulating a high-dimensional dual variable whose size scales with the number of data points is computationally expensive; (2) they are not well suited to online learning, where data arrive sequentially. To address these issues, we consider a class of DRO with a KL-divergence regularization on the dual variables, transform the min-max problem into a compositional minimization problem, and propose practical duality-free online stochastic methods that do not require a large mini-batch size. We establish state-of-the-art complexities for the proposed methods, both with and without a Polyak-Łojasiewicz (PL) condition on the objective. Empirical studies on large-scale deep learning tasks (i) demonstrate that our method can speed up training by more than a factor of two over baseline methods and save days of training time on a large-scale dataset with ∼265K images, and (ii) verify the superior performance of DRO over Empirical Risk Minimization (ERM) on imbalanced datasets. Of independent interest, the proposed method can also be used for solving a family of stochastic compositional problems with state-of-the-art complexities.
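To make the compositional reformulation concrete, the following is the standard closed-form derivation for KL-regularized DRO; the notation is ours and may differ from the paper's. With per-sample losses $\ell(w; x_i)$, dual weights $p$ in the simplex $\Delta_n$, and regularization strength $\lambda > 0$,
$$\min_w \max_{p \in \Delta_n} \sum_{i=1}^n p_i\, \ell(w; x_i) - \lambda\, \mathrm{KL}(p \,\|\, \mathbf{1}/n) \;=\; \min_w \; \lambda \log\Big(\tfrac{1}{n} \sum_{i=1}^n \exp\big(\ell(w; x_i)/\lambda\big)\Big),$$
since the inner maximization over $p$ has the closed-form softmax solution $p_i \propto \exp(\ell(w; x_i)/\lambda)$. The right-hand side is a compositional objective $f(\mathbb{E}_i[g_i(w)])$ with outer function $f(s) = \lambda \log s$ and inner functions $g_i(w) = \exp(\ell(w; x_i)/\lambda)$, so no dual variable needs to be maintained. The sketch below illustrates a generic duality-free online update of the moving-average (SCGD-style) kind for this objective on streaming data; it is a minimal illustration under our own assumptions (the names lam, beta, lr and the toy least-squares loss are hypothetical), not the paper's exact algorithm or a reproduction of its complexity guarantees.

import numpy as np

# Toy streaming setup: linear model with squared loss (hypothetical example).
rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)
w = np.zeros(d)
lam = 1.0    # KL-regularization strength lambda (assumed name)
beta = 0.1   # moving-average weight for the inner-function estimate
lr = 0.01    # step size
u = 1.0      # running estimate of the inner expectation E_i[exp(loss_i / lam)]

for t in range(1000):
    # Receive one streaming sample (x, y); here synthetic.
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()

    pred = w @ x
    loss = 0.5 * (pred - y) ** 2
    grad_loss = (pred - y) * x  # gradient of the per-sample loss

    # Track the inner expectation with a moving average (no dual variable).
    u = (1 - beta) * u + beta * np.exp(loss / lam)

    # Duality-free gradient estimate of lam * log E_i[exp(loss_i / lam)]:
    # grad F(w) ≈ exp(loss / lam) * grad_loss / u
    g = np.exp(loss / lam) * grad_loss / u
    w -= lr * g

In this sketch, the moving average u plays the role that a large mini-batch would otherwise play in estimating the inner expectation, which is what allows the update to run one sample at a time.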
Author Information
Qi Qi (University of Iowa)
Zhishuai Guo (University of Iowa)
Yi Xu (Alibaba Group U.S. Inc.)
Rong Jin (Alibaba)
Tianbao Yang (The University of Iowa)
More from the Same Authors
- 2021 : Practice-Consistent Analysis of Adam-Style Methods »
  Zhishuai Guo · Yi Xu · Wotao Yin · Rong Jin · Tianbao Yang
- 2021 : A Stochastic Momentum Method for Min-max Bilevel Optimization »
  Quanqi Hu · Bokun Wang · Tianbao Yang
- 2021 : A Unified DRO View of Multi-class Loss Functions with top-N Consistency »
  Dixian Zhu · Tianbao Yang
- 2021 : Deep AUC Maximization for Medical Image Classification: Challenges and Opportunities »
  Tianbao Yang
- 2021 : Poster Session 2 (gather.town) »
  Wenjie Li · Akhilesh Soni · Jinwuk Seok · Jianhao Ma · Jeffery Kline · Mathieu Tuli · Miaolan Xie · Robert Gower · Quanqi Hu · Matteo Cacciola · Yuanlu Bai · Boyue Li · Wenhao Zhan · Shentong Mo · Junhyung Lyle Kim · Sajad Fathi Hafshejani · Chris Junchi Li · Zhishuai Guo · Harshvardhan Harshvardhan · Neha Wadia · Tatjana Chavdarova · Difan Zou · Zixiang Chen · Aman Gupta · Jacques Chen · Betty Shea · Benoit Dherin · Aleksandr Beznosikov
- 2021 Poster: Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning »
  ZHENHUAN YANG · Yunwen Lei · Puyu Wang · Tianbao Yang · Yiming Ying
- 2021 Poster: Revisiting Smoothed Online Learning »
  Lijun Zhang · Wei Jiang · Shiyin Lu · Tianbao Yang
- 2021 Poster: Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence »
  Qi Qi · Youzhi Luo · Zhao Xu · Shuiwang Ji · Tianbao Yang
- 2021 Poster: Online Convex Optimization with Continuous Switching Constraint »
  Guanghui Wang · Yuanyu Wan · Tianbao Yang · Lijun Zhang
- 2020 Poster: Improved Schemes for Episodic Memory-based Lifelong Learning »
  Yunhui Guo · Mingrui Liu · Tianbao Yang · Tajana S Rosing
- 2020 Spotlight: Improved Schemes for Episodic Memory-based Lifelong Learning »
  Yunhui Guo · Mingrui Liu · Tianbao Yang · Tajana S Rosing
- 2020 Poster: A Decentralized Parallel Algorithm for Training Generative Adversarial Nets »
  Mingrui Liu · Wei Zhang · Youssef Mroueh · Xiaodong Cui · Jarret Ross · Tianbao Yang · Payel Das
- 2020 Poster: Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization »
  Yan Yan · Yi Xu · Qihang Lin · Wei Liu · Tianbao Yang
- 2019 Poster: XNAS: Neural Architecture Search with Expert Advice »
  Niv Nayman · Asaf Noy · Tal Ridnik · Itamar Friedman · Rong Jin · Lihi Zelnik
- 2019 Poster: Non-asymptotic Analysis of Stochastic Methods for Non-Smooth Non-Convex Regularized Problems »
  Yi Xu · Rong Jin · Tianbao Yang
- 2019 Poster: Stagewise Training Accelerates Convergence of Testing Error Over SGD »
  Zhuoning Yuan · Yan Yan · Rong Jin · Tianbao Yang
- 2018 : Poster spotlight »
  Tianbao Yang · Pavel Dvurechenskii · Panayotis Mertikopoulos · Hugo Berard
- 2018 Poster: First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time »
  Yi Xu · Rong Jin · Tianbao Yang
- 2018 Poster: Adaptive Negative Curvature Descent with Applications in Non-convex Optimization »
  Mingrui Liu · Zhe Li · Xiaoyu Wang · Jinfeng Yi · Tianbao Yang
- 2018 Poster: Faster Online Learning of Optimal Threshold for Consistent F-measure Optimization »
  Xiaoxuan Zhang · Mingrui Liu · Xun Zhou · Tianbao Yang
- 2018 Poster: Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions »
  Mingrui Liu · Xiaoxuan Zhang · Lijun Zhang · Rong Jin · Tianbao Yang
- 2017 Poster: ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization »
  Yi Xu · Mingrui Liu · Qihang Lin · Tianbao Yang
- 2017 Poster: Improved Dynamic Regret for Non-degenerate Functions »
  Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou
- 2017 Poster: Adaptive Accelerated Gradient Converging Method under Hölderian Error Bound Condition »
  Mingrui Liu · Tianbao Yang
- 2017 Poster: Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter »
  Yi Xu · Qihang Lin · Tianbao Yang
- 2016 Poster: Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than $O(1/\epsilon)$ »
  Yi Xu · Yan Yan · Qihang Lin · Tianbao Yang
- 2016 Poster: Improved Dropout for Shallow and Deep Learning »
  Zhe Li · Boqing Gong · Tianbao Yang