Reversible Action Design for Combinatorial Optimization with Reinforcement Learning
Fan Yao, Renqin Cai, Hongning Wang
arXiv.org Artificial Intelligence
Combinatorial optimization problems (COPs) over graphs are a fundamental challenge in optimization. Reinforcement learning (RL) has recently emerged as a new framework to tackle these problems and has demonstrated promising results. However, most RL solutions construct the solution incrementally in a greedy manner, which inevitably imposes unnecessary dependency on the action sequence and requires many problem-specific designs. We propose a general RL framework that not only exhibits state-of-the-art empirical performance but also generalizes to a variety of COPs. Specifically, we define the state as a solution to a problem instance and an action as a perturbation to this solution. We utilize graph neural networks (GNNs) to extract latent representations of given problem instances for state-action encoding, and then apply deep Q-learning to obtain a policy that gradually refines the solution by flipping or swapping vertex labels. Experiments are conducted on Maximum $k$-Cut and the Traveling Salesman Problem, and performance improvements are achieved against a set of learning-based and heuristic baselines.
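The reversible action space described in the abstract can be illustrated with a minimal sketch for Maximum $k$-Cut. All names below are assumptions for illustration; the toy acceptance loop is a stand-in for the paper's learned policy, which instead scores actions with a GNN encoder and deep Q-learning.

```python
import random

def cut_value(edges, labels):
    """Total weight of edges whose endpoints carry different labels."""
    return sum(w for u, v, w in edges if labels[u] != labels[v])

def flip(labels, v, new_label):
    """Reversible action: reassign vertex v to new_label."""
    new = dict(labels)
    new[v] = new_label
    return new

def swap(labels, u, v):
    """Reversible action: exchange the labels of vertices u and v."""
    new = dict(labels)
    new[u], new[v] = new[v], new[u]
    return new

def refine(edges, labels, k, steps=200, seed=0):
    """Toy refinement loop (NOT the paper's method): propose random
    flips/swaps and keep non-worsening ones. Because every action is
    reversible, any complete assignment is a valid state and the search
    never gets stuck on a partial solution."""
    rng = random.Random(seed)
    vertices = list(labels)
    best = cut_value(edges, labels)
    for _ in range(steps):
        if rng.random() < 0.5:
            cand = flip(labels, rng.choice(vertices), rng.randrange(k))
        else:
            cand = swap(labels, rng.choice(vertices), rng.choice(vertices))
        val = cut_value(edges, cand)
        if val >= best:
            labels, best = cand, val
    return labels, best
```

For example, on a 4-cycle with unit weights and $k=2$, starting from the all-zero assignment (cut value 0), the loop only ever moves to assignments of equal or higher cut value, so the returned value is monotonically non-decreasing in the starting value.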
Sep-2-2022