
Scale-conditioned Adaptation for Large Scale Combinatorial Optimization
Minsu Kim · Jiwoo Son · Hyeonah Kim · Jinkyoo Park
Event URL: https://openreview.net/forum?id=oy8hDBI8Qx

Deep reinforcement learning (DRL) for combinatorial optimization has drawn attention as an alternative to human-designed solvers. However, training DRL solvers on large-scale tasks remains challenging due to the NP-hardness of combinatorial optimization problems. This paper proposes a novel scale-conditioned adaptation (SCA) scheme that improves the transferability of pre-trained solvers to larger-scale tasks. The main idea is to build a scale-conditioned policy by plugging a simple deep neural network, denoted the scale-conditioned network (SCN), into an existing DRL model. The SCN extracts a hidden vector from a scale value, which is then added to the representation vector of the pre-trained DRL model. This increment to the representation vector captures the context of the scale information and helps the pre-trained model adapt its policy effectively to larger-scale tasks. Our method is verified to improve the zero-shot and few-shot performance of DRL-based solvers on various large-scale combinatorial optimization tasks.
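The abstract's core mechanism (an SCN that maps a scale value to a hidden vector added to the pre-trained model's representation) can be illustrated with a minimal sketch. The abstract does not specify the SCN's architecture, dimensions, or input normalization, so the two-layer MLP, the embedding size, and the `scale / 1000.0` normalization below are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
d_hidden = 16  # hypothetical embedding dimension of the pre-trained solver

# Hypothetical SCN weights: a scalar scale value -> d_hidden hidden vector.
W1 = rng.normal(scale=0.1, size=(1, 32))
W2 = rng.normal(scale=0.1, size=(32, d_hidden))

def scn(scale: float) -> np.ndarray:
    """Map a problem-scale value to a hidden vector (assumed two-layer MLP)."""
    h = np.tanh(np.array([[scale / 1000.0]]) @ W1)  # assumed input normalization
    return (h @ W2).ravel()

def scale_conditioned_embedding(node_embed: np.ndarray, scale: float) -> np.ndarray:
    """Add the SCN output to each node's representation vector,
    yielding the scale-conditioned representation described in the abstract."""
    return node_embed + scn(scale)[None, :]

# Usage: condition 100 node embeddings of a pre-trained solver on a
# (hypothetical) target scale of 2000 nodes.
node_embed = rng.normal(size=(100, d_hidden))
conditioned = scale_conditioned_embedding(node_embed, scale=2000)
```

Because the SCN output is a single vector broadcast over all nodes, the conditioning leaves the pre-trained per-node features intact and only shifts them by a scale-dependent offset, which is consistent with the abstract's description of an "increment" to the representation vector.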

Author Information

Minsu Kim (Korea Advanced Institute of Science and Technology)
Jiwoo Son (Korea Advanced Institute of Science and Technology)
Hyeonah Kim (Korea Advanced Institute of Science and Technology)
Jinkyoo Park (KAIST)
