

Poster

Learning Combinatorial Optimization Algorithms over Graphs

Elias Khalil · Hanjun Dai · Yuyu Zhang · Bistra Dilkina · Le Song

Pacific Ballroom #141

Keywords: [ Reinforcement Learning and Planning ] [ Deep Learning ] [ Combinatorial Optimization ]


Abstract:

The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
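To make the described framework concrete, below is a minimal sketch of the greedy meta-algorithm from the abstract, applied to Minimum Vertex Cover. It is not the paper's implementation: the function `score_nodes` is a hypothetical stand-in for the learned graph embedding network (here replaced by a hand-coded residual-degree score so the example runs end to end), and the names `score_nodes` and `greedy_construct` are illustrative only.

```python
# Sketch of the greedy meta-algorithm: repeatedly pick the node with the
# highest score and add it to the partial solution until the problem's
# termination condition (all edges covered) is met. In the paper, the score
# would come from a graph embedding network trained with reinforcement
# learning; here it is a simple hand-coded heuristic (an assumption).
import networkx as nx


def score_nodes(graph, partial_solution):
    """Placeholder for the learned scoring function: counts uncovered incident edges."""
    covered = set(partial_solution)
    return {
        v: sum(1 for u in graph.neighbors(v) if u not in covered)
        for v in graph.nodes
        if v not in covered
    }


def greedy_construct(graph):
    """Incrementally build a vertex cover by adding the highest-scoring node each step."""
    solution = []
    uncovered = set(graph.edges)
    while uncovered:
        scores = score_nodes(graph, solution)
        best = max(scores, key=scores.get)  # the "action" chosen from the current state
        solution.append(best)
        uncovered = {(u, v) for (u, v) in uncovered if best not in (u, v)}
    return solution


if __name__ == "__main__":
    g = nx.erdos_renyi_graph(n=20, p=0.2, seed=0)
    cover = greedy_construct(g)
    print(f"cover size: {len(cover)} of {g.number_of_nodes()} nodes")
```

Swapping the hand-coded score for a learned, state-dependent one is the core idea the abstract describes: the construction loop stays fixed across problems, while the scoring network is trained per problem class.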
