Global Solutions to Non-Convex Functional Constrained Problems via Hidden Convexity
Abstract
Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning as well as revenue and inventory management, such problems possess hidden convexity: they can be reformulated as convex programs via a nonlinear invertible transformation c(·). Typically, these transformations are implicit or unknown, making the direct reformulation infeasible. Nevertheless, (sub-)gradients in the original variables are often accessible or can be estimated, which motivates standard algorithms that operate directly in the non-convex domain. In this work, we develop the first algorithms that provably solve such hidden convex constrained problems to global optimality using only standard (sub-)gradient oracle access. Remarkably, despite the non-convexity, our algorithms require no constraint qualifications, and their complexities match those known for unconstrained hidden convex optimization.
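
Concretely, the hidden-convexity structure described above can be sketched as follows; the symbols f, g, X, and U are illustrative notation assumed here, not taken from the abstract:

```latex
% A sketch of hidden convexity (notation illustrative, not the paper's):
% the non-convex constrained problem
\min_{x \in \mathcal{X}} \; f(x) \quad \text{s.t.} \quad g(x) \le 0
% is hidden convex if there exists an invertible map c(\cdot) with
% x = c(u) such that the transformed problem
\min_{u \in \mathcal{U}} \; f(c(u)) \quad \text{s.t.} \quad g(c(u)) \le 0
% is convex, i.e., f \circ c and g \circ c are convex functions and
% \mathcal{U} = c^{-1}(\mathcal{X}) is a convex set.
```

Since c(·) is typically unknown, one cannot optimize in the u-variables directly; the abstract's point is that (sub-)gradient access in the original x-variables alone suffices for global optimality.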