Semidefinite programming (SDP) is a unifying framework that generalizes both linear programming and quadratically constrained quadratic programming, while also admitting solvers that are efficient in both theory and practice. However, known impossibility results rule out approximating the optimal solution when the constraints of a covering SDP arrive in an online fashion. In this paper, we study online covering linear and semidefinite programs in which the algorithm is augmented with advice from a possibly erroneous predictor. We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution, i.e., consistency. On the other hand, if the predictor is inaccurate, then under some technical conditions we achieve results that match both the classical optimal upper bounds and the tight lower bounds up to constant factors, i.e., robustness. More broadly, we introduce a framework that extends both (1) the online set cover problem augmented with machine-learning predictors, studied by Bamas, Maggiori, and Svensson (NeurIPS 2020), and (2) the online covering SDP problem, initiated by Elad, Kale, and Naor (ICALP 2016). Specifically, we obtain general online learning-augmented algorithms for covering linear programs with fractional advice and constraints, and we initiate the study of learning-augmented algorithms for covering SDP problems. Our techniques build on the primal-dual framework of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) and can further be adjusted to handle constraints in which the variables lie in a bounded region, i.e., box constraints.
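For intuition, the sketch below illustrates how a learning-augmented primal-dual update might look for fractional online set cover (the simplest covering LP), blending the classical multiplicative increase of Buchbinder and Naor with a predictor's suggestion via a confidence parameter, in the spirit of the Bamas-Maggiori-Svensson scheme that this work generalizes. The function name, the unit-cost setting, and the specific blending rule are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Illustrative sketch only (assumptions: unit costs, this particular blending
# rule): when an element arrives uncovered, grow the fractional solution
# multiplicatively and split an additive seed of total mass 1 between all
# candidate sets and the predictor's suggested sets.

def cover_element(x, candidate_sets, advice_sets, lam):
    """Raise the fractional solution x until the arriving element is covered.

    x:              dict mapping set id -> current fractional value in [0, 1]
    candidate_sets: ids of the sets containing the arriving element
    advice_sets:    subset of candidate_sets suggested by the (possibly
                    erroneous) predictor
    lam:            confidence in the predictor, in [0, 1]; lam = 0 recovers
                    the classical multiplicative update, lam = 1 follows the
                    advice alone
    """
    if not advice_sets:
        lam = 0.0  # no advice for this element: fall back to the classical update
    d, a = len(candidate_sets), max(len(advice_sets), 1)
    while sum(x[s] for s in candidate_sets) < 1.0:
        for s in candidate_sets:
            # (1 - lam) of the additive mass is spread uniformly over all
            # candidates; lam is concentrated on the predictor's suggestions.
            boost = (1.0 - lam) / d
            if s in advice_sets:
                boost += lam / a
            x[s] = min(1.0, 2.0 * x[s] + boost)


# Tiny usage example: an element covered by sets 0, 1, 2; the predictor suggests {1}.
x = {0: 0.0, 1: 0.0, 2: 0.0}
cover_element(x, [0, 1, 2], {1}, lam=0.5)
print(x)  # set 1 receives the largest share of the fractional increase
```

Taking lam toward 0 yields robustness (the classical competitive guarantee), while taking lam toward 1 yields consistency when the advice is accurate, which mirrors the consistency/robustness trade-off described in the abstract.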
Author Information
Elena Grigorescu (Purdue University)
Young-San Lin (Purdue University)
Sandeep Silwal (Massachusetts Institute of Technology)
Maoyuan Song (Purdue University)
Samson Zhou (Carnegie Mellon University)
More from the Same Authors

- 2022 Spotlight: Lightning Talks 4A-2
  Barakeel Fanseu Kamhoua · Hualin Zhang · Taiki Miyagawa · Tomoya Murata · Xin Lyu · Yan Dai · Elena Grigorescu · Zhipeng Tu · Lijun Zhang · Taiji Suzuki · Wei Jiang · Haipeng Luo · Lin Zhang · Xi Wang · Young-San Lin · Huan Xiong · Liyu Chen · Bin Gu · Jinfeng Yi · Yongqiang Chen · Sandeep Silwal · Yiguang Hong · Maoyuan Song · Lei Wang · Tianbao Yang · Han Yang · MA Kaili · Samson Zhou · Deming Yuan · Bo Han · Guodong Shi · Bo Li · James Cheng
- 2022 Spotlight: Learning-Augmented Algorithms for Online Linear and Semidefinite Programming
  Elena Grigorescu · Young-San Lin · Sandeep Silwal · Maoyuan Song · Samson Zhou
- 2022 Panel: Panel 3C-2: Rethinking Knowledge Graph… & Faster Linear Algebra…
  Sandeep Silwal · Haotong Yang
- 2022 Poster: Faster Linear Algebra for Distance Matrices
  Piotr Indyk · Sandeep Silwal
- 2022 Poster: Exponentially Improving the Complexity of Simulating the Weisfeiler-Lehman Test with Graph Neural Networks
  Anders Aamand · Justin Chen · Piotr Indyk · Shyam Narayanan · Ronitt Rubinfeld · Nicholas Schiefer · Sandeep Silwal · Tal Wagner
- 2021 Poster: Dimensionality Reduction for Wasserstein Barycenter
  Zachary Izzo · Sandeep Silwal · Samson Zhou
- 2021 Poster: Adversarial Robustness of Streaming Algorithms through Importance Sampling
  Vladimir Braverman · Avinatan Hassidim · Yossi Matias · Mariano Schain · Sandeep Silwal · Samson Zhou
- 2017 Poster: Communication-Efficient Distributed Learning of Discrete Distributions
  Ilias Diakonikolas · Elena Grigorescu · Jerry Li · Abhiram Natarajan · Krzysztof Onak · Ludwig Schmidt
- 2017 Oral: Communication-Efficient Distributed Learning of Discrete Distributions
  Ilias Diakonikolas · Elena Grigorescu · Jerry Li · Abhiram Natarajan · Krzysztof Onak · Ludwig Schmidt