Learning to Learn with Quantum Optimization via Quantum Neural Networks
Chen, Kuan-Cheng, Matsuyama, Hiromichi, Huang, Wei-Hao
arXiv.org Artificial Intelligence
Yet, their performance and scalability often hinge on effective parameter optimization, which remains nontrivial due to rugged energy landscapes and hardware noise. In this work, we introduce a quantum meta-learning framework that combines quantum neural networks, specifically Quantum Long Short-Term Memory (QLSTM) architectures, with QAOA. By training the QLSTM optimizer on smaller graph instances, our approach rapidly generalizes to larger, more complex problems, substantially reducing the number of iterations required for convergence. Through comprehensive benchmarks on Max-Cut and Sherrington-Kirkpatrick model instances, we demonstrate that QLSTM-based optimizers converge faster and achieve higher approximation ratios compared to classical baselines, thereby offering a robust pathway toward scalable quantum optimization in the NISQ era.
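To make the setting concrete, the following is a minimal NumPy statevector sketch of the QAOA inner loop that a learned optimizer such as the paper's QLSTM would drive: a depth-1 QAOA circuit for Max-Cut on a small illustrative graph (a 4-node ring), with a simple grid search standing in for the QLSTM's parameter-update rule. The graph, grid resolution, and all function names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def maxcut_cost_vector(n, edges):
    """c[s] = number of cut edges when qubits take the bit values of integer s."""
    c = np.zeros(2**n)
    for s in range(2**n):
        bits = [(s >> q) & 1 for q in range(n)]
        c[s] = sum(bits[i] != bits[j] for i, j in edges)
    return c

def qaoa_expectation(gamma, beta, n, cost):
    """Depth-1 QAOA: <+|^n -> e^{-i gamma C} -> e^{-i beta X} per qubit -> <C>."""
    psi = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)  # uniform superposition
    psi = psi * np.exp(-1j * gamma * cost)                 # diagonal phase separator
    cb, sb = np.cos(beta), np.sin(beta)
    psi = psi.reshape([2] * n)
    for q in range(n):                                     # mixer e^{-i beta X} on each qubit
        psi = np.moveaxis(psi, q, 0)
        a, b = psi[0].copy(), psi[1].copy()
        psi[0] = cb * a - 1j * sb * b
        psi[1] = -1j * sb * a + cb * b
        psi = np.moveaxis(psi, 0, q)
    psi = psi.reshape(-1)
    return float(np.real(np.sum(np.abs(psi)**2 * cost)))

# Illustrative instance: 4-node ring, maximum cut value = 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
cost = maxcut_cost_vector(n, edges)

# Grid search over (gamma, beta); a QLSTM optimizer would instead propose
# the next (gamma, beta) from the history of parameters and energies.
best = max(
    (qaoa_expectation(g, b, n, cost), g, b)
    for g in np.linspace(0, np.pi, 40)
    for b in np.linspace(0, np.pi / 2, 40)
)
print(f"best <C> = {best[0]:.3f} of max cut 4")
```

The point of the meta-learning framing is that this outer search loop is the expensive part on hardware; replacing it with a recurrent model trained on small instances amortizes that cost across problem sizes.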
May-2-2025