

Poster · Workshop on Distribution Shifts: Connecting Methods and Applications

Scale-conditioned Adaptation for Large Scale Combinatorial Optimization

Minsu Kim · Jiwoo Son · Hyeonah Kim · Jinkyoo Park


Abstract:

Deep reinforcement learning (DRL) for combinatorial optimization has drawn attention as an alternative to human-designed solvers. However, training DRL solvers for large-scale tasks remains challenging because combinatorial optimization problems are NP-hard. This paper proposes a novel scale-conditioned adaptation (SCA) scheme that improves the transferability of pre-trained solvers to larger-scale tasks. The main idea is to design a scale-conditioned policy by plugging a simple deep neural network, denoted the scale-conditioned network (SCN), into the existing DRL model. The SCN maps a scale value to a hidden vector, which we add to the representation vector of the pre-trained DRL model. This additive term captures the context of the scale information and helps the pre-trained model adapt its policy effectively to larger-scale tasks. We verify that our method improves the zero-shot and few-shot performance of DRL-based solvers on various large-scale combinatorial optimization tasks.
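For illustration, below is a minimal PyTorch sketch of how such a scale-conditioned network might plug into a pre-trained solver. The layer sizes, the activation, and the `encoder`/`decoder` modules are hypothetical placeholders rather than the authors' architecture; only the core idea, adding an embedding of the scale value to the solver's representation, follows the abstract.

```python
import torch
import torch.nn as nn


class ScaleConditionedNetwork(nn.Module):
    """Sketch of an SCN: maps a scalar problem scale to a hidden vector
    matching the solver's representation dimension. The MLP width and
    ReLU activation are illustrative assumptions."""

    def __init__(self, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, scale: torch.Tensor) -> torch.Tensor:
        # scale: (batch, 1) tensor holding the problem size, e.g. number of nodes
        return self.mlp(scale)


class ScaleConditionedPolicy(nn.Module):
    """Wraps hypothetical pre-trained `encoder` and `decoder` modules and
    shifts the encoder's representation by the SCN output before decoding."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module, embed_dim: int):
        super().__init__()
        self.encoder = encoder  # pre-trained DRL solver encoder (assumed interface)
        self.decoder = decoder
        self.scn = ScaleConditionedNetwork(embed_dim)

    def forward(self, instance: torch.Tensor, scale: torch.Tensor):
        h = self.encoder(instance)            # (batch, n, embed_dim) node embeddings
        h = h + self.scn(scale).unsqueeze(1)  # add scale context to every embedding
        return self.decoder(h)
```

One plausible reading of this design is that, because the scale information enters as a small additive term, few-shot adaptation can focus on the lightweight SCN while leaving the pre-trained weights largely intact; the abstract does not specify which parameters are fine-tuned, so this is an assumption.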
