Unrolled Neural Networks for Constrained Optimization
Samar Hadou · Alejandro Ribeiro
Abstract
In this paper, we develop unrolled neural networks for constrained optimization problems, offering accelerated, learnable counterparts to dual ascent (DA) algorithms. Our framework comprises two layer-wise interacting networks that seek a Lagrangian saddle point, trained via a joint alternating procedure that updates the networks in tandem. The primal network finds a stationary point for a given dual multiplier, while the dual network iteratively refines the multipliers toward optimality. We match DA dynamics by imposing descent (primal) and ascent (dual) constraints during training. Our numerical experiments demonstrate that our approach yields near-optimal, near-feasible solutions and enhances out-of-distribution (OOD) generalization.
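To ground the abstract, the following is a minimal sketch of the classical dual ascent baseline that the paper's networks unroll: an alternating primal minimization and projected dual ascent on the Lagrangian. The quadratic problem, step size, and closed-form primal step are illustrative assumptions, not details from the paper.

```python
# Classical dual ascent (DA) for:  min_x ||x - c||^2  s.t.  a^T x <= b.
# Illustrative problem choice (assumption): the quadratic objective admits a
# closed-form primal step, so the alternating structure is easy to see.

def dual_ascent(c, a, b, eta=0.5, iters=100):
    lam = 0.0  # dual multiplier, constrained to lam >= 0
    for _ in range(iters):
        # Primal step: x = argmin_x L(x, lam); closed form for this quadratic,
        # since grad_x L = 2(x - c) + lam * a = 0  =>  x = c - (lam / 2) * a.
        x = [ci - lam * ai / 2.0 for ci, ai in zip(c, a)]
        # Dual step: gradient ascent on the multiplier, projected onto lam >= 0.
        slack = sum(ai * xi for ai, xi in zip(a, x)) - b
        lam = max(0.0, lam + eta * slack)
    return x, lam

# Example: project c = (2, 2) onto the halfspace x1 + x2 <= 2.
x, lam = dual_ascent([2.0, 2.0], [1.0, 1.0], 2.0)  # x -> (1, 1), lam -> 2
```

The paper's unrolled networks replace these hand-set iterations with learnable layers: a primal network standing in for the primal step and a dual network for the multiplier update, with descent/ascent constraints during training keeping the layers consistent with the DA dynamics above.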