

Poster in Workshop: OPT 2023: Optimization for Machine Learning

K-Spin Ising Model for Combinatorial Optimizations over Graphs: A Reinforcement Learning Approach

Xiao-Yang Liu · Ming Zhu


Abstract: Many combinatorial optimization problems are defined on graphs and are difficult to solve due to the existence of many local optima. In this paper, we propose a K-spin Ising model for graph-based combinatorial optimization (GCO) problems, inspired by the perspective of curriculum learning, which guides the policy neural networks to converge to high-quality solutions. First, we propose a \textit{K-spin Ising model} whose \textit{Hamiltonian} is minimized via reinforcement learning (RL) over multiple steps on graphs; the model is generic for GCO problems and easy to use. Second, we provide general K-spin Ising formulations for NP-complete problems, and present detailed formulations, together with interpretations, for the classical graph maxcut problem. Third, we evaluate our RL approach on both synthetic and benchmark datasets for the graph maxcut problem. On the synthetic dataset, our approach performs slightly better than (or on par with) the commercial solver Gurobi \cite{2023gurobi} in small-scale scenarios (e.g., $100 \sim 3,000$ nodes) with a speedup of hundreds of times, and outperforms Gurobi by $10\%$ in large-scale scenarios (e.g., $5,000 \sim 10,000$ nodes) with a speedup of tens of times. On the benchmark dataset, our approach matches or outperforms five comparison approaches.
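To make the Ising framing concrete, the sketch below shows the standard 2-spin (quadratic) Ising formulation of maxcut, of which the paper's K-spin model is a generalization. For spins $s_i \in \{-1, +1\}$ encoding the two sides of the partition, the cut value is $\sum_{(i,j) \in E} (1 - s_i s_j)/2$, so maximizing the cut is equivalent to minimizing the Hamiltonian $H(s) = -\mathrm{cut}(s)$. The function name below is illustrative, not from the paper.

```python
# Hedged sketch (not the paper's code): classical 2-spin Ising
# formulation of graph maxcut.

def maxcut_hamiltonian(edges, spins):
    """Ising Hamiltonian H(s) = -cut(s) for a spin assignment.

    edges: iterable of (i, j) node pairs
    spins: mapping node -> +1 or -1 (the two partition sides)
    """
    # Each edge contributes 1 to the cut iff its endpoints disagree.
    cut = sum((1 - spins[i] * spins[j]) / 2 for i, j in edges)
    return -cut  # minimizing H maximizes the cut

# Triangle graph: the best cut crosses 2 of the 3 edges.
edges = [(0, 1), (1, 2), (0, 2)]
spins = {0: +1, 1: -1, 2: -1}  # partition {0} vs {1, 2}
print(maxcut_hamiltonian(edges, spins))  # -2.0
```

An RL policy in this setting would propose spin flips over multiple steps, using the decrease in $H$ as the reward signal.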
