Decision-Focused Scenario Generation for Contextual Two-Stage Stochastic Linear Programming
Jonathan Hornewall · Solène Delannoy-Pavy · Vincent Leclere · Tito Homem-De-Mello
Abstract
We introduce a decision-focused scenario generation framework for contextual two-stage stochastic linear programs that bypasses explicit conditional distribution modeling. A neural generator maps a context $x$ to a fixed-size set of scenarios $\{\xi_s(x)\}_{s=1}^S$. For each generated collection we compute a first-stage decision by solving a single log-barrier regularized deterministic equivalent whose KKT system yields closed-form, efficiently computable derivatives via implicit differentiation. The network is trained end-to-end to minimize the true (unregularized) downstream cost evaluated on observed data, avoiding auxiliary value-function surrogates, bilevel heuristics, and differentiation through generic LP solvers. Unlike single-scenario methods, our approach natively learns multi-scenario representations; unlike distribution-learning pipelines, it scales without requiring density estimation in high dimensions. We detail the barrier formulation, the analytic gradient structure with respect to the second-stage data, and the resulting computational trade-offs. Preliminary experiments on synthetic contextual instances show that the method can rival current state-of-the-art approaches, even when trained on small amounts of data.
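The abstract's central computational device, solving a log-barrier regularized deterministic equivalent and differentiating the solution through its KKT conditions, can be illustrated compactly. Below is a minimal JAX sketch, not the authors' implementation: the stacked problem data `(c, A, b)`, the fixed barrier parameter `MU`, and the names `kkt_residual`, `newton_solve`, and `solve` are all illustrative assumptions. In the paper's setting, `(c, A, b)` would assemble the first-stage data together with the generated scenarios $\xi_s(x)$, and the differentiated quantity would feed the true downstream cost rather than a single coordinate.

```python
import jax
import jax.numpy as jnp

MU = 1e-2  # fixed barrier parameter (assumption; the paper may choose or anneal it differently)

def kkt_residual(zl, params):
    """Residual of the log-barrier KKT system.

    params = (c, A, b) is stacked deterministic-equivalent data for
        min_z  c^T z - MU * sum(log z)   s.t.  A z = b,  z > 0,
    and zl stacks the primal variables z with the equality multipliers lam.
    """
    c, A, b = params
    n = c.shape[0]
    z, lam = zl[:n], zl[n:]
    stationarity = c - MU / z + A.T @ lam
    feasibility = A @ z - b
    return jnp.concatenate([stationarity, feasibility])

def newton_solve(params, zl, iters=40):
    """Damped Newton on the KKT residual; keeps z strictly positive."""
    n = params[0].shape[0]
    for _ in range(iters):
        r = kkt_residual(zl, params)
        J = jax.jacfwd(lambda v: kkt_residual(v, params))(zl)
        step = -jnp.linalg.solve(J, r)
        dz = step[:n]
        # Fraction-to-the-boundary damping so the log terms stay defined.
        ratios = jnp.where(dz < 0, -zl[:n] / dz, jnp.inf)
        alpha = jnp.minimum(1.0, 0.95 * jnp.min(ratios))
        zl = zl + alpha * step
    return zl

def _solve_impl(params):
    n, m = params[0].shape[0], params[2].shape[0]
    zl0 = jnp.concatenate([jnp.ones(n), jnp.zeros(m)])  # strictly interior start
    return newton_solve(params, zl0)

@jax.custom_vjp
def solve(params):
    return _solve_impl(params)

def solve_fwd(params):
    zl = _solve_impl(params)
    return zl, (zl, params)

def solve_bwd(res, g):
    zl, params = res
    # Implicit function theorem on F(zl*(p), p) = 0 gives
    #   dzl*/dp = -J_zl^{-1} J_p,  so the VJP is  -(J_zl^{-T} g)^T J_p.
    J = jax.jacfwd(lambda v: kkt_residual(v, params))(zl)
    w = jnp.linalg.solve(J.T, g)  # one adjoint KKT solve, never unrolling Newton
    _, pullback = jax.vjp(lambda p: kkt_residual(zl, p), params)
    return (pullback(-w)[0],)

solve.defvjp(solve_fwd, solve_bwd)

# Toy check: gradient of one decision coordinate w.r.t. the problem data.
c = jnp.array([1.0, 2.0, 3.0])
A = jnp.array([[1.0, 1.0, 1.0]])
b = jnp.array([1.0])
grads = jax.grad(lambda p: solve(p)[0])((c, A, b))
print(grads)  # (dc, dA, db) cotangents via the adjoint KKT system
```

Because the backward pass is a single linear solve against the KKT Jacobian, the Newton iterations themselves are never differentiated through, which is what makes the end-to-end training described above tractable; a dense `jnp.linalg.solve` is used here only for brevity, whereas the deterministic equivalent's block structure across scenarios would be exploited in practice.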