
Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation
Haoyu Peter Wang · Nan Wu · Hang Yang · Cong Hao · Pan Li

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #418

Using machine learning to solve combinatorial optimization (CO) problems is challenging, especially when the data is unlabeled. This work proposes an unsupervised learning framework for CO problems. Our framework follows the standard relaxation-plus-rounding approach and adopts neural networks to parameterize the relaxed solutions so that simple back-propagation can train them end-to-end. Our key contribution is the observation that if the relaxed objective satisfies entry-wise concavity, a low optimization loss guarantees the quality of the obtained integral solutions. This observation significantly generalizes the applicability of the previous framework inspired by Erdős' probabilistic method (Karalias & Loukas, 2020). Our framework is particularly suitable for guiding the design of objective models in applications where the objectives are not given explicitly but must be modeled and learned first. We evaluate our framework on a synthetic graph optimization problem and two real-world applications: resource allocation in circuit design and approximate computing. Our framework largely outperforms baselines based on reinforcement learning and Gumbel-softmax tricks.
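The relaxation-plus-rounding pipeline and the entry-wise concavity guarantee can be illustrated on a toy problem. The sketch below is an illustrative assumption, not the paper's actual implementation: it uses a hypothetical maximum-weight independent set instance on a 4-node path, a penalized loss that is multilinear (hence entry-wise concave in each coordinate), plain projected gradient descent in place of a neural network parameterization, and sequential rounding. Entry-wise concavity means fixing one coordinate at its better endpoint never increases the loss, so the integral solution's loss is bounded by the relaxed loss.

```python
import numpy as np

# Hypothetical toy instance (assumed for illustration): maximum-weight
# independent set on a 4-node path graph.
edges = [(0, 1), (1, 2), (2, 3)]
w = np.array([1.0, 2.0, 1.0, 2.0])
beta = 4.0  # penalty weight larger than any node weight

def loss(x):
    # Relaxed loss: negative collected weight plus a penalty for each edge
    # whose two endpoints are both (fractionally) selected. The expression
    # is multilinear in x, hence entry-wise concave in every coordinate.
    return -w @ x + beta * sum(x[i] * x[j] for i, j in edges)

def grad(x):
    g = -w.copy()
    for i, j in edges:
        g[i] += beta * x[j]
        g[j] += beta * x[i]
    return g

# 1) Optimize the relaxed solution x in [0, 1]^n (projected gradient
#    descent stands in for the paper's neural parameterization).
x = np.full(4, 0.5)
for _ in range(500):
    x = np.clip(x - 0.05 * grad(x), 0.0, 1.0)
relaxed_loss = loss(x)

# 2) Sequential rounding: fix each coordinate to whichever endpoint gives
#    the lower loss. By entry-wise concavity, the loss never increases.
for k in range(4):
    x0, x1 = x.copy(), x.copy()
    x0[k], x1[k] = 0.0, 1.0
    x = x0 if loss(x0) <= loss(x1) else x1

# The integral solution's loss is guaranteed to be at most the relaxed loss.
assert loss(x) <= relaxed_loss + 1e-9
```

On this instance the procedure recovers the independent set {1, 3} with total weight 4, and the final loss of -4 respects the relaxed-loss bound.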

Author Information

Haoyu Peter Wang (Purdue University)
Nan Wu (UC Santa Barbara)
Hang Yang (Nankai University)
Cong Hao (Georgia Institute of Technology)
Pan Li (Georgia Institute of Technology)

More from the Same Authors