Efficient Large Language Models with Zero-Shot Adjustable Acceleration
Sajjad Kachuee, Mohammad Sharifkhani
arXiv.org Artificial Intelligence
Deploying Large Language Models (LLMs) in real-world applications presents significant challenges, particularly in balancing computational efficiency against model performance. Controlling acceleration after fine-tuning and during inference is critical for building efficient architectures. This paper introduces Zero-Shot Adjustable Acceleration, a novel training and inference method that dynamically adjusts hardware utilization at inference time without requiring additional fine-tuning. The proposed approach is applied to recent LLMs and evaluated across multiple classification and text generation tasks. Experimental results demonstrate that the method supports a wide range of zero-shot acceleration settings and achieves up to an 11x speedup over the baseline.
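The abstract does not detail the mechanism, but the core idea of adjusting compute at inference time without retraining can be illustrated with a hypothetical layer-skipping sketch: a fixed stack of layers where a `speedup` knob, chosen per request, selects an evenly spaced subset of layers to execute. All names here (`AdjustableModel`, `speedup`) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

class ToyLayer:
    """Stand-in for a transformer block: a near-identity linear map."""
    def __init__(self, dim, rng):
        self.w = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))

    def __call__(self, x):
        return x @ self.w

class AdjustableModel:
    """Toy model whose compute budget is set per call, with no retraining.

    A `speedup` of 1.0 runs every layer; `speedup` of 4.0 runs roughly a
    quarter of them, spaced evenly across the stack (a common heuristic
    in layer-skipping work; assumed here, not taken from the paper).
    """
    def __init__(self, dim=8, n_layers=12, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [ToyLayer(dim, rng) for _ in range(n_layers)]

    def forward(self, x, speedup=1.0):
        # Keep about n_layers / speedup layers, evenly spaced over the stack.
        keep = max(1, round(len(self.layers) / speedup))
        idx = np.linspace(0, len(self.layers) - 1, keep).round().astype(int)
        used = 0
        for i in sorted(set(idx.tolist())):
            x = self.layers[i](x)
            used += 1
        return x, used
```

Because the subset is chosen at call time, the same trained weights serve every point on the speed/quality trade-off curve, which is the "zero-shot adjustable" property the abstract describes.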
Sep-9-2025