Automatically Learning Compact Quality-aware Surrogates for Optimization Problems
Kai Wang, Bryan Wilder, Andrew Perrault, Milind Tambe
Spotlight presentation: Orals & Spotlights Track 17: Kernel Methods/Optimization
on 2020-12-09, 08:20–08:30 (UTC-08:00)
Abstract: Solving optimization problems with unknown parameters often requires learning a predictive model that estimates those parameters and then solving the problem using the estimated values. Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality. Unfortunately, this process comes at a large computational cost because the optimization problem must be solved and differentiated through in each training iteration; furthermore, it also sometimes fails to improve solution quality due to non-smoothness issues that arise when training through a complex optimization layer. To address these shortcomings, we learn a low-dimensional surrogate model of a large optimization problem by representing the feasible space in terms of meta-variables, each of which is a linear combination of the original variables. By training a low-dimensional surrogate model end-to-end, jointly with the predictive model, we achieve: i) a large reduction in training and inference time; and ii) improved performance by focusing attention on the more important variables in the optimization and learning in a smoother space. Empirically, we demonstrate these improvements on a non-convex adversary modeling task, a submodular recommendation task, and a convex portfolio optimization task.
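To make the surrogate idea concrete, here is a minimal, self-contained PyTorch sketch of the pipeline the abstract describes: a predictive model maps features to problem parameters, a learned linear map P turns k meta-variables into a full n-dimensional decision, and both are trained end-to-end against realized decision quality. The toy quadratic objective, the unrolled gradient-ascent inner solver, and all names (decision_quality, solve_surrogate, the synthetic data) are illustrative assumptions, not the authors' implementation.

```python
import torch

torch.manual_seed(0)
n, k, d = 100, 5, 10           # decision dim, meta-variable dim, feature dim
model = torch.nn.Linear(d, n)  # predictive model: features -> parameters theta
P = torch.nn.Parameter(0.1 * torch.randn(n, k))  # meta-variable basis (learned)

def decision_quality(x, theta):
    # Toy concave stand-in for the true decision quality f(x; theta).
    return (theta * x).sum() - 0.5 * (x ** 2).sum()

def solve_surrogate(theta, steps=20, lr=0.1):
    # Differentiable inner solve: unrolled gradient ascent over the
    # k-dimensional meta-variables y; the full decision is x = P @ y.
    y = torch.zeros(k, requires_grad=True)
    for _ in range(steps):
        obj = decision_quality(P @ y, theta)
        (g,) = torch.autograd.grad(obj, y, create_graph=True)
        y = y + lr * g
    return P @ y

# Synthetic (features, true parameters) pairs standing in for a real dataset.
data = [(torch.randn(d), torch.randn(n)) for _ in range(32)]
opt = torch.optim.Adam(list(model.parameters()) + [P], lr=1e-3)
for features, true_theta in data:
    x = solve_surrogate(model(features))     # decide using predicted theta
    loss = -decision_quality(x, true_theta)  # evaluate on the true theta
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the low-dimensional reparameterization is that the inner solve and its backward pass scale with k rather than n; the paper obtains the same effect with a proper differentiable optimization layer over the meta-variables rather than this unrolled approximation.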