Stochastic programming is a powerful modeling framework for decision-making under uncertainty. In this work, we tackle two-stage stochastic programs (2SPs), the most widely used class of stochastic programming models. Solving 2SPs exactly requires optimizing over an expected value function that is computationally intractable. Having a mixed-integer linear program (MIP) or a nonlinear program (NLP) in the second stage further aggravates the intractability, even when specialized algorithms that exploit problem structure are employed. Finding high-quality (first-stage) solutions -- without leveraging problem structure -- can be crucial in such settings. We develop Neur2SP, a new method that approximates the expected value function via a neural network to obtain a surrogate model that can be solved more efficiently than the traditional extensive formulation approach. Neur2SP makes no assumptions about the problem structure, in particular about the second-stage problem, and can be implemented using an off-the-shelf MIP solver. Our extensive computational experiments on four benchmark 2SP problem classes with different structures (containing MIP and NLP second-stage problems) demonstrate the efficiency (time) and efficacy (solution quality) of Neur2SP. In under 1.66 seconds, Neur2SP finds high-quality solutions across all problems even as the number of scenarios increases, a desirable property that is difficult to achieve with traditional 2SP solution techniques. For comparison, the most generic baseline method typically requires minutes to hours to find solutions of comparable quality.
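To make the surrogate idea concrete, the sketch below embeds a small trained ReLU network into a MIP so that the network's prediction of the expected second-stage value appears directly in the objective. This is a minimal illustration of the general technique, not the authors' released implementation: the weights (W1, b1, W2, b2), the cost vector c, the problem dimensions, and the use of gurobipy with max general constraints are all assumptions made for the example.

```python
# Minimal sketch: embed a (hypothetical) trained ReLU network approximating the
# expected second-stage value E[Q(x, xi)] into a MIP, then minimize first-stage
# cost plus the network's prediction. Requires gurobipy (Gurobi 9.0+ for
# general max constraints). All numeric values below are illustrative.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Hypothetical trained weights: 2 first-stage variables -> 4 hidden units -> 1 output.
W1 = np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4], [0.7, 0.6]])
b1 = np.array([0.1, -0.2, 0.0, 0.3])
W2 = np.array([[1.0, -0.5, 0.8, 0.2]])
b2 = np.array([0.05])

m = gp.Model("neur2sp_surrogate")
x = m.addVars(2, vtype=GRB.BINARY, name="x")          # first-stage decisions
h_pre = m.addVars(4, lb=-GRB.INFINITY, name="h_pre")  # hidden pre-activations
h = m.addVars(4, lb=0.0, name="h")                    # post-ReLU activations
q = m.addVar(lb=-GRB.INFINITY, name="q")              # predicted expected value

for j in range(4):
    # Affine layer: h_pre[j] = (W1 x + b1)[j]
    m.addConstr(h_pre[j] == gp.quicksum(W1[j, i] * x[i] for i in range(2)) + b1[j])
    # ReLU encoded exactly as a max general constraint: h[j] = max(h_pre[j], 0)
    m.addGenConstrMax(h[j], [h_pre[j]], constant=0.0)

# Output layer: q = W2 h + b2
m.addConstr(q == gp.quicksum(W2[0, j] * h[j] for j in range(4)) + b2[0])

# Objective: first-stage cost c^T x plus the learned approximation of E[Q(x, xi)].
c = np.array([1.0, 2.0])
m.setObjective(gp.quicksum(c[i] * x[i] for i in range(2)) + q, GRB.MINIMIZE)
m.optimize()
print("x* =", [x[i].X for i in range(2)], " predicted E[Q] =", q.X)
```

The point of the encoding is that a trained ReLU network is just affine constraints plus piecewise-linear max constraints, so the resulting surrogate stays a MIP of fixed size regardless of the number of scenarios; the scenario information is absorbed offline, during training of the network.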
Author Information
Rahul Mihir Patel (University of Toronto)
Justin Dumouchelle (University of Toronto)
Elias Khalil (University of Toronto)
Merve Bodur (University of Toronto)