Fast Variability Approximation: Speeding up Divergence-Based Distributionally Robust Optimization via Directed Perturbation
Henry Lam · Mohamed Lakhnichi
Abstract
Distributionally Robust Optimization (DRO) has become a popular paradigm for decision-making under uncertainty, especially when the uncertainty arises from the underlying distributions in stochastic problems. While DRO is known to enjoy a range of robustness and statistical advantages, it incurs additional computational overhead, and this cost can grow substantially during hyperparameter tuning. We show that, in the case of $\phi$-divergence uncertainty sets, simply perturbing an empirical optimizer (i.e., the solution from sample average approximation) in a statistically guided fashion achieves almost the same generalization effect as DRO. Importantly, this perturbation avoids the expensive overhead of DRO as long as the problem is smooth enough to allow suitable gradient extraction via direct computation or resampling-based methods.
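The sketch below illustrates one plausible instantiation of the idea described in the abstract, not the paper's actual procedure: it assumes the $\phi$-divergence DRO objective is locally approximated by "empirical mean plus a scaled empirical standard deviation" and applies a single Newton-type correction step to the SAA solution. The quadratic loss, the radius `rho`, and the helper functions `loss`, `grad`, and `hess` are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): one statistically guided
# perturbation of the SAA solution, assuming the phi-divergence DRO objective
# is locally approximated by  empirical mean + sqrt(2*rho/n) * empirical std.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=2.0, size=500)   # synthetic data (assumption)
n, rho = xi.size, 0.05                          # rho: uncertainty-set radius (assumption)

def loss(x, xi):   # smooth per-sample cost h(x, xi)
    return (x - xi) ** 2

def grad(x, xi):   # per-sample gradient of h in x
    return 2.0 * (x - xi)

def hess(x, xi):   # per-sample Hessian of h in x (scalar decision here)
    return 2.0 * np.ones_like(xi)

# Step 1: empirical (SAA) optimizer -- plain gradient descent on the sample mean.
x = 0.0
for _ in range(200):
    x -= 0.1 * grad(x, xi).mean()

# Step 2: directed perturbation. Gradient of the empirical standard deviation
# of the loss, extracted by direct computation (a resampling estimate would
# also work), followed by a single Newton-type correction step.
h_vals, g_vals = loss(x, xi), grad(x, xi)
std = h_vals.std()                                            # population std for consistency below
grad_std = np.mean((h_vals - h_vals.mean()) * g_vals) / std   # d std / dx
H = hess(x, xi).mean()                                        # Hessian of the SAA objective
x_perturbed = x - np.sqrt(2.0 * rho / n) * grad_std / H

print(f"SAA solution:       {x:.4f}")
print(f"Perturbed solution: {x_perturbed:.4f}")
```

Under these assumptions, the perturbed point approximates the DRO solution at the cost of one SAA solve plus a single gradient-and-Hessian evaluation, rather than a full robust optimization.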