Ensemble-based Deep Reinforcement Learning for Vehicle Routing Problems under Distribution Shift

Yuan Jiang · Zhiguang Cao · Yaoxin Wu · Wen Song · Jie Zhang

Great Hall & Hall B1+B2 (level 1) #1200
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST


While performing favourably on independent and identically distributed (i.i.d.) instances, most existing neural methods for vehicle routing problems (VRPs) struggle to generalize in the presence of a distribution shift. To tackle this issue, we propose an ensemble-based deep reinforcement learning method for VRPs, which learns a group of diverse sub-policies to cope with various instance distributions. In particular, to prevent the sub-policies' parameters from converging to the same values, we enforce diversity across them by leveraging Bootstrap with random initialization. Moreover, we explicitly pursue inequality between sub-policies through regularization terms during training to further enhance diversity. Experimental results show that our method outperforms state-of-the-art neural baselines on randomly generated instances of various distributions, and also generalizes favourably to benchmark instances from TSPLib and CVRPLib, confirming the effectiveness of the overall method and its individual designs.
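The two diversity mechanisms described in the abstract can be illustrated with a minimal sketch: each sub-policy is randomly initialized and trained on its own bootstrap resample of the instance pool, while a pairwise-distance regularizer explicitly pushes sub-policy parameters apart. This is a hypothetical toy version (linear-softmax policies, a placeholder reward, and made-up hyperparameters such as `lam`), not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(42)
K, DIM, N_ACTIONS = 3, 4, 5  # ensemble size, toy feature dim, toy action space

# Bootstrap with random initialization: each sub-policy starts from its
# own random weights (hypothetical linear-softmax policies).
policies = [rng.normal(scale=0.5, size=(DIM, N_ACTIONS)) for _ in range(K)]

def action_probs(theta, x):
    """Softmax over candidate actions for instance features x."""
    logits = x @ theta
    logits -= logits.max()  # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def mean_pairwise_distance(ps):
    """Average squared parameter distance; larger means a more diverse ensemble."""
    d, n = 0.0, 0
    for i in range(len(ps)):
        for j in range(i + 1, len(ps)):
            d += np.sum((ps[i] - ps[j]) ** 2)
            n += 1
    return d / n

instances = rng.normal(size=(64, DIM))  # stand-in for a VRP instance pool
lr, lam = 0.1, 0.01                     # assumed learning rate / diversity weight

for step in range(50):
    for k in range(K):
        # Bootstrap: sub-policy k trains on its own resample of the pool.
        batch = instances[rng.integers(0, len(instances), size=16)]
        grad = np.zeros_like(policies[k])
        for x in batch:
            p = action_probs(policies[k], x)
            a = rng.choice(N_ACTIONS, p=p)
            r = 1.0 if a == 0 else 0.0  # toy reward (a real method scores tour cost)
            # REINFORCE gradient of log pi(a|x): outer(x, onehot(a) - p)
            glog = -np.outer(x, p)
            glog[:, a] += x
            grad += r * glog
        # Diversity regularizer: ascend the pairwise parameter distance,
        # pushing sub-policy k away from the other sub-policies.
        div_grad = sum(policies[k] - policies[j] for j in range(K) if j != k)
        policies[k] = policies[k] + lr * (grad / len(batch)) + lam * div_grad
```

The diversity term here is a simple parameter-space repulsion; the paper's regularizers operate on the sub-policies themselves, but the effect sketched is the same: the ensemble members stay distinct instead of collapsing to one policy.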
