Poster
Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient
David Applegate · Mateo Diaz · Oliver Hinder · Haihao Lu · Miles Lubin · Brendan O'Donoghue · Warren Schudy
We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy expected in traditional LP applications. In addition, it can scale to very large problems because its core operations are matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP. We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017. With a target of $10^{-8}$ relative accuracy and a 1-hour time limit, PDLP achieves a 6.3x reduction in the geometric mean of solve times and a 4.6x reduction in the number of unsolved instances (from 227 to 49). Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP, written in Julia, outperforms a commercial LP solver.
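To make the core idea concrete, here is a minimal sketch of vanilla PDHG applied to the saddle-point formulation of an equality-form LP, min {c^T x : Ax = b, x >= 0}. This is an illustrative baseline only: it omits all of PDLP's enhancements (diagonal preconditioning, presolving, adaptive step sizes, and restarts), and the function name and step-size choices are ours, not from the paper.

```python
import numpy as np

def pdhg_lp(c, A, b, tau, sigma, iters=20000):
    """Vanilla PDHG for  min c^T x  s.t.  Ax = b, x >= 0.

    Solves the saddle-point problem  min_{x>=0} max_y  c^T x + y^T (b - A x).
    Requires tau * sigma * ||A||_2^2 < 1 for convergence (Chambolle-Pock).
    Note: this bare sketch has none of PDLP's enhancements.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal gradient step followed by projection onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual ascent step using the extrapolated primal iterate.
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
    return x, y

# Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum: x = (1, 0)).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = pdhg_lp(c, A, b, tau=0.5, sigma=0.5)  # 0.5 * 0.5 * ||A||^2 = 0.5 < 1
```

Note that each iteration touches A only through matrix-vector products with A and A^T, which is what lets this family of methods scale to instances too large for factorization-based simplex or interior-point solvers.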
Author Information
David Applegate (Google Research)
David Applegate is a Research Scientist at Google NYC, in the Large Scale Optimization group within the Algorithms and Optimization team. He came to Google in 2016, after 16 years as a Lead Member of Technical Staff in the AT&T Shannon Research Laboratory. Prior to that he spent 4 years as an Associate Professor at Rice University and 4 years as a Member of Technical Staff in AT&T Bell Labs. He has a Ph.D. in Computer Science from Carnegie Mellon (1991) and a B.S. in Computer Science and Math from the University of Dayton (1984).
Mateo Diaz (Cornell University)
Oliver Hinder (University of Pittsburgh)
Haihao Lu (University of Chicago)
Miles Lubin (Google)
Brendan O'Donoghue (DeepMind)
Warren Schudy (Google)
More from the Same Authors
- 2021 Spotlight: Variational Bayesian Optimistic Sampling
  Brendan O'Donoghue · Tor Lattimore
- 2021 Spotlight: Reward is enough for convex MDPs
  Tom Zahavy · Brendan O'Donoghue · Guillaume Desjardins · Satinder Singh
- 2022 Poster: Stars: Tera-Scale Graph Building for Clustering and Learning
  CJ Carey · Jonathan Halcrow · Rajesh Jayaram · Vahab Mirrokni · Warren Schudy · Peilin Zhong
- 2022 Poster: A consistently adaptive trust-region method
  Fadi Hamad · Oliver Hinder
- 2021: Contributed talks in Session 3 (Zoom)
  Oliver Hinder · Wenhao Zhan · Akhilesh Soni · Grigory Malinovsky · Boyue Li
- 2021: Opening Remarks to Session 3
  Oliver Hinder
- 2021 Workshop: OPT 2021: Optimization for Machine Learning
  Courtney Paquette · Quanquan Gu · Oliver Hinder · Katya Scheinberg · Sebastian Stich · Martin Takac
- 2021 Poster: Reward is enough for convex MDPs
  Tom Zahavy · Brendan O'Donoghue · Guillaume Desjardins · Satinder Singh
- 2021 Poster: Variational Bayesian Optimistic Sampling
  Brendan O'Donoghue · Tor Lattimore
- 2021 Poster: Variational Bayesian Reinforcement Learning with Regret Bounds
  Brendan O'Donoghue
- 2021 Poster: Synthetic Design: An Optimization Approach to Experimental Design with Synthetic Controls
  Nick Doudchenko · Khashayar Khosravi · Jean Pouget-Abadie · Sébastien Lahaie · Miles Lubin · Vahab Mirrokni · Jann Spiess · Guido Imbens
- 2020 Poster: Conic Descent and its Application to Memory-efficient Optimization over Positive Semidefinite Matrices
  John Duchi · Oliver Hinder · Andrew Naber · Yinyu Ye
- 2020 Poster: An efficient nonconvex reformulation of stagewise convex optimization problems
  Rudy Bunel · Oliver Hinder · Srinadh Bhojanapalli · Krishnamurthy Dvijotham
- 2020 Poster: Efficient Clustering for Stretched Mixtures: Landscape and Optimality
  Kaizheng Wang · Yuling Yan · Mateo Diaz
- 2020 Poster: Contextual Reserve Price Optimization in Auctions via Mixed Integer Programming
  Joey Huchette · Haihao Lu · Hossein Esfandiari · Vahab Mirrokni
- 2019: Poster Session 1
  Hongzi Mao · Vikram Nathan · Ioana Baldini · Viswanath Sivakumar · Haonan Wang · Vinoj Yasanga Jayasundara Magalle Hewa · Zhan Shi · Samuel Kaufman · Joyce Fang · Giulio Zhou · Jialin Ding · Hao He · Miles Lubin
- 2019 Poster: Hamiltonian descent for composite objectives
  Brendan O'Donoghue · Chris Maddison
- 2019 Poster: Variance Reduction in Bipartite Experiments through Correlation Clustering
  Jean Pouget-Abadie · Kevin Aydin · Warren Schudy · Kay Brodersen · Vahab Mirrokni