Accelerating Parametric Convex Optimization: Design and Verification of First-Order Methods
Abstract
We present statistical learning tools to analyze and design accelerated first-order methods for parametric convex optimization. We establish generalization guarantees for classical optimizers via sample convergence bounds, and for learned optimizers via the Probably Approximately Correct (PAC)-Bayes framework. These guarantees are computed from iterate trajectories, which we collect efficiently by modeling the iterations as computational graphs and evaluating them in parallel on GPUs. Building on this, we design accelerated first-order methods by tuning key algorithm parameters (such as gradient steps and warm starts), obtaining provable performance bounds that closely match the methods' out-of-sample behavior. Together, these results provide a pathway toward automated algorithm design and verification, uniting theoretical guarantees with empirical performance.
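To make the trajectory-collection step concrete, the following is a minimal sketch (not the authors' code) of how first-order iterations can be expressed as a computational graph and evaluated in parallel over sampled problem parameters on a GPU. It runs plain gradient descent on an illustrative parametric quadratic f(z; x) = 0.5 z'Pz + x'z; the matrix P, the step size, and the trajectory length NUM_ITERS are assumptions made for this example only.

```python
import jax
import jax.numpy as jnp

NUM_ITERS = 100  # fixed trajectory length K (illustrative choice)


def make_runner(P, step, num_iters=NUM_ITERS):
    """Build a function that runs `num_iters` gradient steps on f(z; x) = 0.5 z'Pz + x'z."""
    def run(x, z0):
        def body(z, _):
            z_next = z - step * (P @ z + x)   # one gradient step; grad f = Pz + x
            return z_next, z_next             # carry the iterate, record it in the trajectory
        _, iterates = jax.lax.scan(body, z0, xs=None, length=num_iters)
        return iterates                        # shape (num_iters, n): the full trajectory
    return run


if __name__ == "__main__":
    n, batch = 10, 512
    A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
    P = A @ A.T + jnp.eye(n)                             # positive-definite quadratic term
    step = 1.0 / jnp.linalg.eigvalsh(P)[-1]              # step size 1/L (L = largest eigenvalue)
    xs = jax.random.normal(jax.random.PRNGKey(1), (batch, n))   # sampled problem parameters
    z0s = jnp.zeros((batch, n))                          # cold starts (or learned warm starts)

    # vmap batches the iteration graph over problem instances; jit compiles it for the device.
    run_batched = jax.jit(jax.vmap(make_runner(P, step)))
    trajectories = run_batched(xs, z0s)
    print(trajectories.shape)                            # (512, 100, 10)
```

The same pattern extends to other fixed-point iterations and to learned parameters (e.g., a warm-start predictor producing z0s), since the batched trajectories are exactly the samples on which the generalization bounds are evaluated.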