Poster
A Faster Decentralized Algorithm for Nonconvex Minimax Problems
Wenhan Xian · Feihu Huang · Yanfu Zhang · Heng Huang
In this paper, we study the nonconvex-strongly-concave minimax optimization problem in the decentralized setting. Minimax problems are attracting increasing attention because of practical applications such as policy evaluation and adversarial training. As training data grow larger, distributed training has been broadly adopted in machine learning tasks. Recent research shows that decentralized data-parallel training techniques are especially promising, because they achieve efficient communication and avoid both the bottleneck at a central node and the latency of low-bandwidth networks. However, decentralized minimax problems have seldom been studied in the literature, and the existing methods suffer from very high gradient complexity. To address this challenge, we propose a new, faster decentralized algorithm, named DM-HSGD, for nonconvex minimax problems, which uses the variance-reduction technique of hybrid stochastic gradient descent. We prove that our DM-HSGD algorithm achieves a stochastic first-order oracle (SFO) complexity of $O(\kappa^3 \epsilon^{-3})$ for the decentralized stochastic nonconvex-strongly-concave problem to find an $\epsilon$-stationary point, which improves on the best existing theoretical results. Moreover, we also prove that our algorithm achieves linear speedup with respect to the number of workers. Our experiments in decentralized settings show the superior performance of our new algorithm.
Author Information
Wenhan Xian (University of Pittsburgh)
Feihu Huang (University of Pittsburgh)
Yanfu Zhang (University of Pittsburgh)
Heng Huang (University of Pittsburgh)
More from the Same Authors
- 2022 : FedGRec: Federated Graph Recommender System with Lazy Update of Latent Embeddings »
  Junyi Li · Heng Huang
- 2022 : Cooperation or Competition: Avoiding Player Domination for Multi-target Robustness by Adaptive Budgets »
  Yimu Wang · Dinghuai Zhang · Yihan Wu · Heng Huang · Hongyang Zhang
- 2022 Poster: MetricFormer: A Unified Perspective of Correlation Exploring in Similarity Learning »
  Jiexi Yan · Erkun Yang · Cheng Deng · Heng Huang
- 2022 Poster: Enhanced Bilevel Optimization via Bregman Distance »
  Feihu Huang · Junyi Li · Shangqian Gao · Heng Huang
- 2022 Poster: Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization »
  Feihu Huang · Shangqian Gao · Jian Pei · Heng Huang
- 2021 Poster: Optimal Underdamped Langevin MCMC Method »
  Zhengmian Hu · Feihu Huang · Heng Huang
- 2021 Poster: Fast Training Method for Stochastic Compositional Optimization Problems »
  Hongchang Gao · Heng Huang
- 2021 Poster: SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients »
  Feihu Huang · Junyi Li · Heng Huang
- 2021 Poster: Efficient Mirror Descent Ascent Methods for Nonsmooth Minimax Problems »
  Feihu Huang · Xidong Wu · Heng Huang
- 2019 Poster: Curvilinear Distance Metric Learning »
  Shuo Chen · Lei Luo · Jian Yang · Chen Gong · Jun Li · Heng Huang
- 2018 Poster: Bilevel Distance Metric Learning for Robust Image Recognition »
  Jie Xu · Lei Luo · Cheng Deng · Heng Huang
- 2018 Poster: Training Neural Networks Using Features Replay »
  Zhouyuan Huo · Bin Gu · Heng Huang
- 2018 Spotlight: Training Neural Networks Using Features Replay »
  Zhouyuan Huo · Bin Gu · Heng Huang
- 2017 Poster: Group Sparse Additive Machine »
  Hong Chen · Xiaoqian Wang · Cheng Deng · Heng Huang
- 2017 Poster: Regularized Modal Regression with Applications in Cognitive Impairment Prediction »
  Xiaoqian Wang · Hong Chen · Weidong Cai · Dinggang Shen · Heng Huang
- 2017 Poster: Learning A Structured Optimal Bipartite Graph for Co-Clustering »
  Feiping Nie · Xiaoqian Wang · Cheng Deng · Heng Huang