SSSD: Simply-Scalable Speculative Decoding
Michele Marzollo, Jiawei Zhuang, Niklas Roemer, Lorenz K. Müller, Lukas Cavigelli
arXiv.org Artificial Intelligence
Over the past year, Speculative Decoding has gained popularity as a technique for accelerating Large Language Model inference. While several methods have been introduced, most struggle to deliver satisfactory performance at batch sizes typical for data centers ($\geq 8$) and often involve significant deployment complexities. In this work, we offer a theoretical explanation of how Speculative Decoding can be effectively utilized with larger batch sizes. We also introduce a method that integrates seamlessly into existing systems without additional training or the complexity of deploying a small LLM. In a continuous batching setting, we achieve a 4x increase in throughput without any latency impact for short context generation, and a 1.7-2x improvement in both latency and throughput for longer contexts.
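For readers unfamiliar with the draft-then-verify loop the abstract refers to, below is a minimal, self-contained sketch of *generic* speculative decoding with greedy verification. It is not the SSSD method from the paper (which integrates into existing systems without training or a separate small LLM); the `toy_target`, `toy_draft`, and `speculative_decode` names and the draft-model stand-in are purely illustrative assumptions.

```python
# A minimal sketch of generic speculative decoding with greedy verification.
# This is NOT the paper's SSSD method: SSSD avoids deploying a draft model,
# whereas this toy uses a stand-in draft function only to show the loop.

from typing import Callable, List

Token = int


def speculative_decode(
    target_next: Callable[[List[Token]], Token],  # target model's greedy next token
    draft_next: Callable[[List[Token]], Token],   # cheap proposal for the next token
    prompt: List[Token],
    max_new_tokens: int = 32,
    draft_len: int = 4,                           # tokens speculated per step
) -> List[Token]:
    seq = list(prompt)
    generated = 0
    while generated < max_new_tokens:
        # 1) Draft: cheaply propose `draft_len` tokens autoregressively.
        drafted: List[Token] = []
        ctx = list(seq)
        for _ in range(draft_len):
            t = draft_next(ctx)
            drafted.append(t)
            ctx.append(t)

        # 2) Verify: the target model checks all drafted positions (in a real
        #    system, one forward pass); accept the longest matching prefix.
        accepted = 0
        for i, t in enumerate(drafted):
            if target_next(seq + drafted[:i]) == t:
                accepted += 1
            else:
                break

        # 3) Commit the accepted tokens plus one token from the target model
        #    (the correction at the first mismatch, or a bonus token).
        seq.extend(drafted[:accepted])
        seq.append(target_next(seq))
        generated += accepted + 1
    return seq[len(prompt):][:max_new_tokens]


if __name__ == "__main__":
    # Toy models: the target repeats last token + 1 (mod 100); the draft agrees
    # most of the time, so several tokens per step are typically accepted.
    toy_target = lambda ctx: (ctx[-1] + 1) % 100
    toy_draft = lambda ctx: (ctx[-1] + (1 if len(ctx) % 7 else 2)) % 100
    print(speculative_decode(toy_target, toy_draft, prompt=[0], max_new_tokens=16))
```

The throughput gain comes from step 3: each verification pass commits up to `draft_len + 1` tokens for roughly the cost of one target-model step, provided the proposals are usually accepted.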
Nov-8-2024