Generalizable Heuristic Generation Through Large Language Models with Meta-Optimization
Yiding Shi, Jianan Zhou, Wen Song, Jieyi Bi, Yaoxin Wu, Jie Zhang
arXiv.org Artificial Intelligence
Heuristic design with large language models (LLMs) has emerged as a promising approach for tackling combinatorial optimization problems (COPs). However, existing approaches often rely on manually predefined evolutionary computation (EC) optimizers and single-task training schemes, which may constrain the exploration of diverse heuristic algorithms and hinder the generalization of the resulting heuristics. To address these issues, we propose Meta-Optimization of Heuristics (MoH), a novel framework that operates at the optimizer level, discovering effective optimizers through the principle of meta-learning. Specifically, MoH leverages LLMs to iteratively refine a meta-optimizer that autonomously constructs diverse optimizers through (self-)invocation, thereby eliminating the reliance on a predefined EC optimizer. These constructed optimizers subsequently evolve heuristics for downstream tasks, enabling broader heuristic exploration. Moreover, MoH employs a multi-task training scheme to promote its generalization capability. Experiments on classic COPs demonstrate that MoH constructs an effective and interpretable meta-optimizer, achieving state-of-the-art performance across various downstream tasks, particularly in cross-size settings.
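The abstract describes a two-level loop: an LLM iteratively refines a meta-optimizer, which in turn constructs optimizers that evolve heuristics across several downstream tasks at once. A minimal sketch of that loop is below; all names (`llm_propose`, `evaluate`, the string-based "programs") are illustrative stubs standing in for real LLM calls and fitness evaluations, not the paper's actual API.

```python
def llm_propose(program):
    # Stand-in for an LLM call that returns a refined candidate program
    # (a trivial string transform so the sketch runs without any model).
    return program + "+refined"

def evaluate(heuristic, task):
    # Stand-in fitness function; a real system would run the heuristic
    # on COP instances for this task. Here: arbitrary toy score.
    return task - len(heuristic)

def meta_optimize(tasks, meta_iters=3, inner_iters=2):
    meta_optimizer = "meta_opt_v0"  # the meta-optimizer the LLM refines
    best = {t: ("h0", float("-inf")) for t in tasks}
    for _ in range(meta_iters):
        # 1) Optimizer-level search: the LLM refines the meta-optimizer
        #    itself, rather than a fixed, hand-designed EC optimizer.
        meta_optimizer = llm_propose(meta_optimizer)
        # 2) Multi-task training: the constructed optimizer evolves
        #    heuristics for every downstream task, which is the scheme
        #    the abstract credits with better generalization.
        for t in tasks:
            heuristic = best[t][0]
            for _ in range(inner_iters):
                cand = llm_propose(heuristic)
                score = evaluate(cand, t)
                if score > best[t][1]:
                    best[t] = (cand, score)
    return meta_optimizer, best
```

The self-invocation step of MoH (the meta-optimizer constructing diverse optimizers by calling itself) is collapsed here into the single `llm_propose` stub; the sketch only conveys the nesting of the two search levels.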
May 28, 2025