LLM-QFL: Distilling Large Language Model for Quantum Federated Learning
Dev Gurung, Shiva Raj Pokhrel
arXiv.org Artificial Intelligence
Inspired by the power of large language models (LLMs), our research adapts them to quantum federated learning (QFL) to boost efficiency and performance. We propose a federated fine-tuning method that distills an LLM within QFL, allowing each client to locally adapt the model to its own data while preserving privacy and reducing unnecessary global updates. The fine-tuned LLM also acts as a reinforcement agent, optimizing QFL by adjusting optimizer steps, cutting down communication rounds, and intelligently selecting clients. Experiments show significant efficiency gains. We pioneer a synergy between LLM and QFL, offering: i) practical efficiency: reduced communication costs and faster convergence; ii) theoretical rigor: provable guarantees for adaptive federated optimization; iii) scalability: parameter-efficient fine-tuning (PEFT) methods (LoRA, QLoRA) enable deployment on resource-constrained quantum devices. Code implementation is available online.
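The abstract does not give implementation details, but the communication savings it claims from PEFT can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy (not the paper's code): each client trains only a LoRA-style low-rank adapter over a frozen base weight, and the server federated-averages the small factors instead of the full matrix. Averaging the factors A and B separately is a common simplification; it is not identical to averaging the per-client products A_i @ B_i.

```python
import numpy as np

# Toy sketch: LoRA-style adapters in a federated round.
# Only the low-rank factors A (d x r) and B (r x d) are trained and
# communicated; the pretrained base weight W stays frozen and local.

rng = np.random.default_rng(0)
d, r, n_clients = 64, 4, 5

W = rng.normal(size=(d, d))  # frozen pretrained weight, shared by all clients

def client_update(seed):
    """Stand-in for local training: each client returns its adapter factors."""
    g = np.random.default_rng(seed)
    A = g.normal(scale=0.01, size=(d, r))
    B = g.normal(scale=0.01, size=(r, d))
    return A, B

# Server side: FedAvg over the adapter factors only.
updates = [client_update(s) for s in range(n_clients)]
A_avg = sum(A for A, _ in updates) / n_clients
B_avg = sum(B for _, B in updates) / n_clients

W_eff = W + A_avg @ B_avg  # effective weight after merging the adapter

full = W.size                   # parameters if full weights were exchanged
lora = A_avg.size + B_avg.size  # parameters actually communicated
print(f"communicated {lora} vs {full} params ({lora / full:.1%})")
```

With d=64 and rank r=4, each round exchanges 512 instead of 4096 parameters, which is the kind of reduction that makes deployment on resource-constrained quantum devices plausible.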
May-27-2025