
Workshop: New Frontiers in Graph Learning (GLFrontiers)

RL4CO: a Unified Reinforcement Learning for Combinatorial Optimization Library

Federico Berto · Chuanbo Hua · Junyoung Park · Minsu Kim · Hyeonah Kim · Jiwoo Son · Haeyeon Kim · Joungho Kim · Jinkyoo Park

Keywords: [ Benchmark ] [ Combinatorial Optimization ] [ Reinforcement Learning ] [ Neural Combinatorial Optimization ] [ Library ]


Deep reinforcement learning offers notable benefits over traditional solvers for combinatorial problems: it reduces reliance on domain-specific knowledge and expert solutions, and it improves computational efficiency. Despite the recent surge of interest in neural combinatorial optimization, practitioners often lack access to a standardized codebase, and different algorithms frequently rely on fragmented implementations that hinder reproducibility and fair comparison. To address these challenges, we introduce RL4CO, a unified Reinforcement Learning (RL) for Combinatorial Optimization (CO) library. We employ state-of-the-art software and implementation best practices, such as modularity and configuration management, so that the library is flexible, easily modifiable, and extensible by researchers. Thanks to our unified codebase, we benchmark baseline RL solvers under different evaluation schemes, covering zero-shot performance, generalization, and adaptability on diverse tasks. Notably, we find that some recent methods can fall behind their predecessors depending on the evaluation settings. We hope RL4CO will encourage the exploration of novel solutions to complex real-world tasks, allowing the community to compare against existing methods through a unified framework that decouples the science from the software engineering. We open-source our library at
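The core setting the abstract describes, learning a construction policy for a combinatorial problem with reinforcement learning, can be illustrated with a self-contained toy sketch. This is not RL4CO's API: the Plackett-Luce policy, `sample_tour`, and the moving-average baseline below are hypothetical names chosen for illustration, showing REINFORCE on a tiny TSP instance rather than any method from the paper.

```python
import math
import random

random.seed(0)

# Toy TSP instance: n random points in the unit square.
n = 5
pts = [(random.random(), random.random()) for _ in range(n)]

def tour_len(tour):
    """Total length of the closed tour visiting cities in order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % n]]) for i in range(n))

# Plackett-Luce construction policy: one learnable logit per city;
# a tour is sampled by drawing cities without replacement via softmax.
theta = [0.0] * n

def sample_tour():
    """Sample a tour and return it with its log-probability."""
    remaining = list(range(n))
    tour, logp = [], 0.0
    while remaining:
        z = max(theta[c] for c in remaining)  # shift for numerical stability
        w = [math.exp(theta[c] - z) for c in remaining]
        s = sum(w)
        r, acc = random.random() * s, 0.0
        for i, wi in enumerate(w):
            acc += wi
            if acc >= r:
                break
        logp += math.log(w[i] / s)
        tour.append(remaining.pop(i))
    return tour, logp

def grad_logp(tour):
    """Analytic gradient of log pi(tour) w.r.t. theta (softmax rule)."""
    g = [0.0] * n
    remaining = list(range(n))
    for chosen in tour:
        z = max(theta[c] for c in remaining)
        s = sum(math.exp(theta[c] - z) for c in remaining)
        for c in remaining:
            p = math.exp(theta[c] - z) / s
            g[c] += (1.0 if c == chosen else 0.0) - p
        remaining.remove(chosen)
    return g

# REINFORCE with an exponential moving-average baseline: the reward is
# the negative tour length, so updates push theta toward shorter tours.
baseline, lr = None, 0.1
for step in range(1500):
    tour, _ = sample_tour()
    cost = tour_len(tour)
    baseline = cost if baseline is None else 0.9 * baseline + 0.1 * cost
    g = grad_logp(tour)
    for c in range(n):
        theta[c] += lr * (baseline - cost) * g[c]

best_tour, _ = sample_tour()
print("cost after training:", round(tour_len(best_tour), 3))
```

In practice, neural CO methods replace these tabular logits with an encoder-decoder network conditioned on the problem instance; the abstract's point about a unified codebase is that such environments, policies, and baselines become interchangeable modules instead of fragmented per-paper implementations.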
