Deep reinforcement learning (DRL) for combinatorial optimization has drawn attention as an alternative to human-designed solvers. However, training DRL solvers for large-scale tasks remains challenging because combinatorial optimization problems are NP-hard. This paper proposes a novel \textit{scale-conditioned adaptation} (SCA) scheme that improves the transferability of pre-trained solvers to larger-scale tasks. The main idea is to construct a scale-conditioned policy by plugging a simple deep neural network, denoted the \textit{scale-conditioned network} (SCN), into an existing DRL model. The SCN extracts a hidden vector from a scale value, which is then added to the representation vector of the pre-trained DRL model. This increment to the representation vector captures the scale context and helps the pre-trained model adapt its policy effectively to larger-scale tasks. We verify that our method improves the zero-shot and few-shot performance of DRL-based solvers on various large-scale combinatorial optimization tasks.
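As a rough illustration of the SCA idea described above, the minimal PyTorch sketch below shows how a scale-conditioned network might map a scalar problem scale to a hidden vector that is added to a pre-trained encoder's representation before decoding. The class name, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a scale-conditioned network (SCN), assuming an
# attention-style encoder-decoder DRL solver. Names and shapes below are
# illustrative assumptions, not the paper's official code.
import torch
import torch.nn as nn


class ScaleConditionedNetwork(nn.Module):
    """Maps a scalar problem scale (e.g., number of nodes) to a hidden vector."""

    def __init__(self, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, scale: torch.Tensor) -> torch.Tensor:
        # scale: (batch, 1) tensor holding the task scale as a float
        return self.mlp(scale)


def scale_conditioned_embedding(encoder_out: torch.Tensor,
                                scale: torch.Tensor,
                                scn: ScaleConditionedNetwork) -> torch.Tensor:
    """Add the SCN output to the pre-trained encoder's representation."""
    # encoder_out: (batch, num_nodes, embed_dim) from the pre-trained encoder
    # The scale embedding is broadcast across nodes before the decoder runs.
    return encoder_out + scn(scale).unsqueeze(1)

In this sketch only the small SCN (and, during few-shot adaptation, optionally the decoder) would need to be trained, which is consistent with the goal of adapting a pre-trained solver to larger scales at low cost.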
Author Information
Minsu Kim (Korea Advanced Institute of Science and Technology)
Jiwoo SON (Korea Advanced Institute of Science & Technology)
Hyeonah Kim (Korea Advanced Institute of Science and Technology)
Jinkyoo Park (KAIST)
More from the Same Authors
- 2022 : Collaborative symmetricity exploitation for offline learning of hardware design solver
  HAEYEON KIM · Minsu Kim · joungho kim · Jinkyoo Park
- 2022 : Neural Coarsening Process for Multi-level Graph Combinatorial Optimization
  Hyeonah Kim · Minsu Kim · Changhyun Kwon · Jinkyoo Park
- 2023 Poster: Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences
  Minsu Kim · Federico Berto · Sungsoo Ahn · Jinkyoo Park
- 2023 Poster: Learning Efficient Surrogate Dynamic Models with Graph Spline Networks
  Chuanbo Hua · Federico Berto · Michael Poli · Stefano Massaroli · Jinkyoo Park
- 2022 Poster: Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
  Minsu Kim · Junyoung Park · Jinkyoo Park
- 2022 Poster: Learning NP-Hard Multi-Agent Assignment Planning using GNN: Inference on a Random Graph and Provable Auction-Fitted Q-learning
  HYUNWOOK KANG · Taehwan Kwon · Jinkyoo Park · James R. Morrison
- 2022 Poster: Transform Once: Efficient Operator Learning in Frequency Domain
  Michael Poli · Stefano Massaroli · Federico Berto · Jinkyoo Park · Tri Dao · Christopher Ré · Stefano Ermon
- 2021 : Neural Solvers for Fast and Accurate Numerical Optimal Control
  Federico Berto · Stefano Massaroli · Michael Poli · Jinkyoo Park
- 2021 : TorchDyn: Implicit Models and Neural Numerical Methods in PyTorch
  Michael Poli · Stefano Massaroli · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Stefano Ermon
- 2021 Poster: Differentiable Multiple Shooting Layers
  Stefano Massaroli · Michael Poli · Sho Sonoda · Taiji Suzuki · Jinkyoo Park · Atsushi Yamashita · Hajime Asama
- 2021 Poster: Learning Collaborative Policies to Solve NP-hard Routing Problems
  Minsu Kim · Jinkyoo Park · joungho kim
- 2021 Poster: Neural Hybrid Automata: Learning Dynamics With Multiple Modes and Stochastic Transitions
  Michael Poli · Stefano Massaroli · Luca Scimeca · Sanghyuk Chun · Seong Joon Oh · Atsushi Yamashita · Hajime Asama · Jinkyoo Park · Animesh Garg
- 2020 Poster: Dissecting Neural ODEs
  Stefano Massaroli · Michael Poli · Jinkyoo Park · Atsushi Yamashita · Hajime Asama
- 2020 Poster: Hypersolvers: Toward Fast Continuous-Depth Models
  Michael Poli · Stefano Massaroli · Atsushi Yamashita · Hajime Asama · Jinkyoo Park
- 2020 Oral: Dissecting Neural ODEs
  Stefano Massaroli · Michael Poli · Jinkyoo Park · Atsushi Yamashita · Hajime Asama