A Novel Hat-Shaped Device-Cloud Collaborative Inference Framework for Large Language Models
Zuan Xie, Yang Xu, Hongli Xu, Yunming Liao, Zhiwei Yao
arXiv.org Artificial Intelligence, Mar-23-2025
Abstract--Recent advancements in large language models (LLMs) have catalyzed a substantial surge in demand for LLM services. While traditional cloud-based LLM services satisfy high-accuracy requirements, they fall short of meeting critical demands for low delay and enhanced privacy. To address these limitations, we propose HAT, a novel device-cloud collaborative inference framework that leverages the complementary strengths of U-shaped inference and speculative decoding. HAT partitions the LLM into three submodels: the input and output submodels, stacked with a lightweight adapter network, are deployed as a small language model (SLM) on each end device, while the middle submodel, encompassing the majority of the LLM's decoder layers, is hosted in the cloud to perform speculative decoding with the on-device SLMs. During inference, HAT exchanges hidden states (rather than raw tokens) of input or draft tokens between devices and the cloud, which incurs substantial communication delays. Moreover, processing the hidden states of long prompts exacerbates computation delays in the cloud, further compromising inference efficiency. To improve efficiency, we introduce a prompt chunking mechanism that segments long prompts into shorter chunks, enabling parallel transmission and processing. Furthermore, HAT dynamically determines the optimal chunk size for each device handling long prompts, thereby improving overall inference speed. Extensive experiments are conducted on a physical testbed comprising 30 NVIDIA Jetson devices and a server with 8 NVIDIA A6000 GPUs. Experimental results demonstrate that HAT achieves promising performance improvements, reducing time-to-first-token (TTFT) by 41% to 54% and time-between-tokens (TBT) by 41% to 77% compared to the baselines.

Recent advancements in large language models (LLMs) have revolutionized the field of natural language processing, demonstrating unprecedented capabilities across various tasks and triggering exponential growth of LLM services [1], [2]. For instance, OpenAI's ChatGPT provides various services, e.g., chat-based interaction and automated writing, to approximately 180 million users, and processes over 1.6 billion requests monthly [3]. LLM services mainly operate through an autoregressive process, which involves a prefill phase followed by a decode phase. In the prefill phase, the LLM processes all input prompt tokens simultaneously, leveraging parallel computation to generate the initial output token.
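To make this pipeline concrete, the following minimal sketch illustrates chunked prefill followed by autoregressive decode under a HAT-style three-way partition. All function names, the fixed chunk size, and the stub submodels are hypothetical stand-ins for illustration only; the paper determines chunk sizes dynamically per device, and the real submodels are transformer layers exchanging hidden-state tensors.

```python
# Minimal sketch of HAT-style chunked prefill + autoregressive decode.
# All names (device_encode, cloud_process, device_decode_head, CHUNK_SIZE)
# are hypothetical stand-ins, not the paper's actual API; submodels are stubs.

from typing import List

CHUNK_SIZE = 4  # HAT picks this per device; fixed here for illustration


def device_encode(tokens: List[int]) -> List[float]:
    """On-device input submodel: map tokens to hidden states (stub)."""
    return [float(t) for t in tokens]


def cloud_process(hidden_chunk: List[float]) -> List[float]:
    """Cloud-side middle submodel: transform hidden states (stub)."""
    return [h * 2.0 for h in hidden_chunk]


def device_decode_head(hidden: List[float]) -> int:
    """On-device output submodel: produce the next token id (stub)."""
    return int(sum(hidden)) % 50257


def chunked_prefill(prompt: List[int]) -> List[float]:
    """Split a long prompt into chunks so that, in HAT, transmitting one
    chunk's hidden states can overlap with cloud computation on the
    previous chunk (this sequential loop only approximates the pipeline)."""
    processed: List[float] = []
    for start in range(0, len(prompt), CHUNK_SIZE):
        chunk = prompt[start:start + CHUNK_SIZE]
        hidden = device_encode(chunk)             # device: input submodel
        processed.extend(cloud_process(hidden))   # cloud: middle submodel
    return processed


def generate(prompt: List[int], max_new_tokens: int = 3) -> List[int]:
    hidden = chunked_prefill(prompt)              # prefill: all prompt tokens
    output = [device_decode_head(hidden)]         # initial output token
    for _ in range(max_new_tokens - 1):           # decode: one token per step
        hidden = cloud_process(device_encode(output[-1:]))
        output.append(device_decode_head(hidden))
    return output


print(generate([101, 7592, 2088, 102]))
```

The key design point the sketch hints at is that prefill cost grows with prompt length while decode handles one token at a time, which is why chunking targets the prefill phase: shorter chunks let transmission and cloud computation proceed in parallel instead of waiting for the whole prompt's hidden states.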