An Overview of GPU-Based First-Order Methods for Linear Programming and Beyond
Abstract
The rapid advancement of GPU computing has transformed numerous fields, yet its impact on mathematical programming, including linear programming (LP) and its extensions, remains an actively evolving research area. In this lecture, we aim to provide an overview of recent developments in GPU-accelerated solvers, with a primary focus on first-order methods (FOMs) for LP and their extensions. We begin by outlining the fundamental differences between CPU and GPU architectures and presenting a numerical study on the speedup achieved by GPUs over CPUs for key numerical linear algebra operations that serve as computational bottlenecks in various optimization algorithms. Next, we discuss how to design FOMs for problems such as LP from the ground up. We then introduce the newly developed LP solvers PDLP, cuPDLP, and cuPDLPx, describing the practical enhancements that improve their numerical performance, and summarize numerical studies comparing cuPDLPx with commercial solvers. This is followed by an in-depth discussion of the theoretical foundations of the primal-dual hybrid gradient (PDHG) method, which serves as the base algorithm for PDLP. We present a simplified and unified perspective on PDHG, including a basic convergence analysis, and explore its connections with non-expansive operators, leading to a principled understanding of the algorithm. Furthermore, we examine the sharpness property of LP, a key characteristic that enables the linear convergence of PDHG. We discuss optimal linear convergence, infeasibility detection, and Halpern acceleration, followed by a summary of condition-number-based analyses. Finally, we review recent progress in GPU-based solvers for quadratic and semidefinite programming, highlighting broader advancements in large-scale optimization with GPUs.