Reinforcement Learning for Ising Models: Datasets and Benchmark
Abstract
Searching for the ground state of Ising models remains a century-old unsolved problem, crucial for the analysis of physical systems [33, 38, 39] and as an abstraction of combinatorial optimization problems [35, 51]. However, due to the huge discrete space and the rough, glassy optimization landscape of Ising models, heuristic methods are computationally infeasible at large scale. Reinforcement learning algorithms provide a promising alternative for obtaining high-quality suboptimal minima. However, there is no established dataset for benchmarking RL methods on Ising problems. In this paper, we curate a comprehensive dataset of over 190,000 Ising instances and a state-of-the-art (SOTA) benchmark of solvers and RL methods. Our work promotes interdisciplinary physics applications within the ML community and encourages physicists to apply the ML community's expertise to their problems. Furthermore, we propose a novel transformer-based policy framework. Our experiments demonstrate state-of-the-art effectiveness and scalability, with around a 1%-5% gap to industry-level solvers on large-scale Ising problems. Datasets and benchmarks are open source at link.