Poster in Workshop: OPT 2023: Optimization for Machine Learning

Stochastic Optimization under Hidden Convexity

Ilyas Fatkhullin · Niao He · Yifan Hu


Abstract: In this work, we consider stochastic non-convex constrained optimization problems under hidden convexity, i.e., problems that admit a convex reformulation via a black-box (non-linear, but invertible) map $c: \mathcal{X} \rightarrow \mathcal{U}$. A number of non-convex problems, ranging from optimal control, revenue and inventory management, to convex reinforcement learning, admit such a hidden convex structure. Unfortunately, in most of these applications the map $c(\cdot)$ is unavailable, so the reduction to solving a convex optimization problem is not possible. On the other hand, (stochastic) gradients with respect to the original variable $x \in \mathcal{X}$ are often easy to obtain. Motivated by these observations, we study projected stochastic (sub-)gradient methods under hidden convexity and provide the first sample complexity guarantees for global convergence in both smooth and non-smooth settings. Additionally, we strengthen our results to last-iterate convergence in function value in the smooth setting, using a momentum variant of projected stochastic gradient descent.
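For readers unfamiliar with the method class, the following is a minimal sketch of projected stochastic gradient descent with a heavy-ball-style momentum term. The constraint set (a Euclidean ball), the step sizes, and the toy noisy-gradient oracle are illustrative assumptions, not the exact algorithm or parameters analyzed in the paper.

import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}.
    Illustrative choice of constraint set X, not from the paper."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def projected_sgd_momentum(stoch_grad, x0, steps=1000, lr=0.01, beta=0.9):
    """Projected SGD with a heavy-ball-style momentum average.

    stoch_grad(x) should return an unbiased stochastic estimate of the
    gradient of the objective at x, taken with respect to the original
    variable x (no access to the hidden map c is needed).
    """
    x, m = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = stoch_grad(x)              # stochastic gradient in x-space
        m = beta * m + (1 - beta) * g  # exponential moving average of gradients
        x = project_ball(x - lr * m)   # gradient step, then projection onto X
    return x

# Toy usage: noisy gradients of a simple smooth objective ||x - 0.5||^2.
rng = np.random.default_rng(0)
f_grad = lambda x: 2 * (x - 0.5) + 0.1 * rng.standard_normal(x.shape)
x_last = projected_sgd_momentum(f_grad, x0=np.ones(5))

The structural point the sketch illustrates is that each update uses only a stochastic gradient in the original variable and a projection onto the constraint set; the hidden convex reformulation via $c(\cdot)$ never needs to be computed.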
