Conditional stochastic optimization covers a variety of applications, ranging from invariant learning and causal inference to meta-learning. However, constructing unbiased gradient estimators for such problems is challenging due to the composition structure. As an alternative, we propose a biased stochastic gradient descent (BSGD) algorithm and study the bias-variance tradeoff under different structural assumptions. We establish the sample complexities of BSGD for strongly convex, convex, and weakly convex objectives, under both smooth and non-smooth conditions. Our lower bound analysis shows that the sample complexities of BSGD cannot be improved for general convex and nonconvex objectives, except for smooth nonconvex objectives with a Lipschitz continuous gradient estimator. For this special setting, we propose an accelerated algorithm, biased SpiderBoost (BSpiderBoost), that matches the lower bound complexity. We further conduct numerical experiments on invariant logistic regression and model-agnostic meta-learning to illustrate the performance of BSGD and BSpiderBoost.
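To make the estimator concrete, below is a minimal sketch of a BSGD step on a toy conditional stochastic optimization problem min_x E_ξ[ f_ξ( E_{η|ξ}[ g_η(x, ξ) ] ) ]. This is not the paper's implementation: the problem data (`A`, `b`), the noise scale, the inner batch size `m`, and the step size `lr` are all hypothetical placeholders chosen only so the script runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                 # decision dimension
x = np.zeros(d)       # iterate

# Hypothetical problem data: row A[i] and scalar b[i] define outer scenario xi.
A = rng.standard_normal((20, d))
b = rng.standard_normal(20)

def bsgd_step(x, lr=0.05, m=10):
    """One BSGD step for min_x E_xi[ f_xi( E_{eta|xi}[ g_eta(x, xi) ] ) ]
    with f_xi(u) = 0.5 * (u - b_xi)**2 and g_eta(x, xi) = (a_xi + eta) @ x."""
    i = rng.integers(len(b))                  # draw one outer sample xi
    a_xi, b_xi = A[i], b[i]
    etas = 0.1 * rng.standard_normal((m, d))  # m inner samples eta | xi
    grads_g = a_xi + etas                     # grad_x g_eta(x, xi), shape (m, d)
    g_bar = (grads_g @ x).mean()              # empirical inner mean of g
    # Chain rule through the *empirical* inner mean: f'(g_bar) * mean grad g.
    # Plugging g_bar in for the true conditional expectation is what biases
    # the estimator; the bias shrinks as the inner batch size m grows.
    grad_est = (g_bar - b_xi) * grads_g.mean(axis=0)
    return x - lr * grad_est

for _ in range(2000):
    x = bsgd_step(x)
print("final iterate:", x)
```

The only departure from plain SGD is that the inner conditional expectation is replaced by an empirical mean over m samples before the outer gradient is chained through it, so the resulting estimator is biased. Increasing m reduces the bias at the cost of more samples per step, which is the bias-variance tradeoff the abstract analyzes.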
Author Information
Yifan Hu (University of Illinois at Urbana-Champaign)
Siqi Zhang (University of Illinois at Urbana-Champaign)
Xin Chen (University of Illinois at Urbana-Champaign)
Niao He (University of Illinois at Urbana-Champaign)
More from the Same Authors
- 2020 Poster: A Catalyst Framework for Minimax Optimization
  Junchi Yang · Siqi Zhang · Negar Kiyavash · Niao He
- 2020 Poster: Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems
  Junchi Yang · Negar Kiyavash · Niao He
- 2020 Poster: A Unified Switching System Perspective and Convergence Analysis of Q-Learning Algorithms
  Donghwan Lee · Niao He
- 2020 Poster: The Devil is in the Detail: A Framework for Macroscopic Prediction via Microscopic Models
  Yingxiang Yang · Negar Kiyavash · Le Song · Niao He
- 2020 Poster: The Mean-Squared Error of Double Q-Learning
  Wentao Weng · Harsh Gupta · Niao He · Lei Ying · R. Srikant
- 2020 Spotlight: The Devil is in the Detail: A Framework for Macroscopic Prediction via Microscopic Models
  Yingxiang Yang · Negar Kiyavash · Le Song · Niao He
- 2019 Workshop: Bridging Game Theory and Deep Learning
  Ioannis Mitliagkas · Gauthier Gidel · Niao He · Reyhane Askari Hemmat · N H · Nika Haghtalab · Simon Lacoste-Julien
- 2019 Workshop: The Optimization Foundations of Reinforcement Learning
  Bo Dai · Niao He · Nicolas Le Roux · Lihong Li · Dale Schuurmans · Martha White
- 2019 Poster: Exponential Family Estimation via Adversarial Dynamics Embedding
  Bo Dai · Zhen Liu · Hanjun Dai · Niao He · Arthur Gretton · Le Song · Dale Schuurmans
- 2019 Poster: Learning Positive Functions with Pseudo Mirror Descent
  Yingxiang Yang · Haoxiang Wang · Negar Kiyavash · Niao He
- 2019 Spotlight: Learning Positive Functions with Pseudo Mirror Descent
  Yingxiang Yang · Haoxiang Wang · Negar Kiyavash · Niao He
- 2018 Poster: Coupled Variational Bayes via Optimization Embedding
  Bo Dai · Hanjun Dai · Niao He · Weiyang Liu · Zhen Liu · Jianshu Chen · Lin Xiao · Le Song
- 2018 Poster: Predictive Approximate Bayesian Computation via Saddle Points
  Yingxiang Yang · Bo Dai · Negar Kiyavash · Niao He
- 2018 Poster: Quadratic Decomposable Submodular Function Minimization
  Pan Li · Niao He · Olgica Milenkovic
- 2017 Poster: Online Learning for Multivariate Hawkes Processes
  Yingxiang Yang · Jalal Etesami · Niao He · Negar Kiyavash
- 2016 Workshop: OPT 2016: Optimization for Machine Learning
  Suvrit Sra · Francis Bach · Sashank J. Reddi · Niao He