Learning Linear Programs from Optimal Decisions
Yingcong Tan, Daria Terekhov, Andrew Delong
Spotlight presentation: Orals & Spotlights Track 17: Kernel Methods/Optimization
on 2020-12-09, 08:10-08:20 (UTC-08:00)
Abstract: We propose a flexible gradient-based framework for learning linear programs from optimal decisions. Linear programs are often specified by hand, using prior knowledge of relevant costs and constraints. In some applications, linear programs must instead be learned from observations of optimal decisions. Learning from optimal decisions is a particularly challenging bilevel problem, and much of the related inverse optimization literature is dedicated to special cases. We tackle the general problem, learning all parameters jointly while allowing flexible parameterizations of costs, constraints, and loss functions. We also address challenges specific to learning linear programs, such as empty feasible regions and non-unique optimal decisions. Experiments show that our method successfully learns synthetic linear programs and minimum-cost multi-commodity flow instances for which previous methods are not directly applicable. We also provide a fast batch-mode PyTorch implementation of the homogeneous interior point algorithm, which supports gradients by implicit differentiation or backpropagation.
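One way to read the bilevel structure described in the abstract (notation ours, not taken from the paper): the outer problem searches over LP parameters $w$ so that the inner LP's optimizer matches the observed decisions,

$$\min_{w}\; \ell\big(x^\star(w),\, x_{\mathrm{obs}}\big) \quad \text{s.t.} \quad x^\star(w) \in \operatorname*{arg\,min}_{x}\; \big\{\, c(w)^\top x \;:\; A(w)\,x \le b(w) \,\big\},$$

which is nonsmooth and can be ill-posed when the feasible region is empty or $x^\star(w)$ is not unique, the failure modes the abstract calls out.

As a rough, self-contained sketch of the "learning from optimal decisions" setup (not the authors' implementation, which differentiates through their own batch-mode homogeneous interior-point solver), the snippet below uses cvxpylayers as a stand-in differentiable solver and adds a small quadratic term to smooth the LP so gradients are informative; the problem size, data, and `eps` are illustrative assumptions:

```python
# Hedged sketch: recover an LP cost vector from observed optimal decisions.
# cvxpylayers is a stand-in for the paper's differentiable interior-point
# solver; the eps*||x||^2 smoothing is our assumption, not the paper's method.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

torch.manual_seed(0)
torch.set_default_dtype(torch.double)

n, m = 5, 3
A = torch.randn(m, n)
b = A @ torch.rand(n)  # b = A x0 for some x0 >= 0, so the region is nonempty

# Inner problem: min_x  c^T x + eps * ||x||^2   s.t.  A x <= b, x >= 0
x = cp.Variable(n)
c_param = cp.Parameter(n)
eps = 1e-2
problem = cp.Problem(
    cp.Minimize(c_param @ x + eps * cp.sum_squares(x)),
    [A.numpy() @ x <= b.numpy(), x >= 0],
)
solver = CvxpyLayer(problem, parameters=[c_param], variables=[x])

c_true = torch.randn(n)
x_obs, = solver(c_true)  # "observed" optimal decision

# Outer problem: fit c_hat so the solver reproduces x_obs.
c_hat = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([c_hat], lr=0.1)
for step in range(200):
    opt.zero_grad()
    x_pred, = solver(c_hat)
    loss = torch.sum((x_pred - x_obs) ** 2)  # loss on decisions, not on c
    loss.backward()  # gradient via implicit differentiation through the solver
    opt.step()
print(loss.item())
```

Note that the learned cost is identifiable only up to transformations that preserve the argmin (e.g. positive scaling), which is one reason the loss here compares decisions rather than cost vectors.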