Collaborative Large Language Model Inference via Resource-Aware Parallel Speculative Decoding
Jungyeon Koh, Hyun Jong Yang
The growing demand for on-device large language model (LLM) inference highlights the need for efficient mobile edge computing (MEC) solutions, especially in resource-constrained settings. Speculative decoding offers a promising solution by partitioning token generation between a lightweight draft model on mobile devices and a powerful target model on edge servers, but it suffers from communication overhead and asynchronous delays. This paper is the first to propose a unified framework that jointly optimizes user association and resource allocation (UARA) to support efficient parallel speculative decoding. We solve the UARA problem using a multi-agent deep reinforcement learning algorithm. To evaluate our approach under realistic conditions, we conduct experiments using the Sionna simulator. Results show that our method achieves up to a 28.0% reduction (23.7% on average) in end-to-end latency without compromising inference accuracy, enabling scalable and low-latency LLM services in MEC systems.
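For context, the draft-then-verify loop underlying speculative decoding can be sketched as below. This is a minimal illustration with toy stand-in models: the names draft_model and target_model, the vocabulary, and the simplified greedy acceptance rule are assumptions for exposition, not the paper's implementation (the paper's contribution is the UARA optimization around this loop, with the draft on the device and the target on the edge server).

```python
import random

# Toy "models": map a token sequence to a next-token distribution.
# Hypothetical stand-ins for the on-device draft LLM and the
# edge-server target LLM; any callables with this signature work.
VOCAB = list(range(8))

def _toy_dist(tokens, salt):
    random.seed(sum(tokens) % salt)
    w = [random.random() for _ in VOCAB]
    s = sum(w)
    return [x / s for x in w]

def draft_model(tokens):
    return _toy_dist(tokens, 97)

def target_model(tokens):
    return _toy_dist(tokens, 89)

def argmax(dist):
    return max(range(len(dist)), key=dist.__getitem__)

def speculative_step(tokens, k=4):
    """One draft-then-verify round (simplified greedy acceptance).

    The draft model proposes k tokens autoregressively; the target
    model then checks each position and keeps the longest prefix it
    agrees with, substituting its own token at the first mismatch."""
    proposal = list(tokens)
    for _ in range(k):
        proposal.append(argmax(draft_model(proposal)))

    accepted = list(tokens)
    for i in range(k):
        t = argmax(target_model(accepted))   # target's choice here
        accepted.append(t)
        if t != proposal[len(tokens) + i]:   # mismatch: stop early
            return accepted
    # All k drafted tokens accepted; target adds one bonus token.
    accepted.append(argmax(target_model(accepted)))
    return accepted

seq = [1, 2, 3]
for _ in range(3):
    seq = speculative_step(seq)
print(seq)
```

In a deployed system the target model would score all k proposed positions in a single batched forward pass on the edge server, which is where the latency savings come from; the per-round exchange of drafts and verdicts between device and server is the communication overhead that the UARA framework is designed to manage.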
arXiv.org Artificial Intelligence
Dec 1, 2025