Efficient Large Language Model Inference with Neural Block Linearization
Mete Erdogan, Francesco Tonin, Volkan Cevher
arXiv.org Artificial Intelligence
The high inference demands of transformer-based Large Language Models (LLMs) pose substantial challenges to their deployment. To address this, we introduce Neural Block Linearization (NBL), a novel framework for accelerating transformer inference by replacing self-attention layers with linear approximations derived from Linear Minimum Mean Squared Error (LMMSE) estimators. NBL leverages Canonical Correlation Analysis (CCA) to compute a theoretical upper bound on the approximation error, and uses this bound as a substitution criterion, selecting the LLM layers with the lowest linearization error. NBL can be applied efficiently to pre-trained LLMs without fine-tuning. In experiments, NBL achieves notable computational speed-ups while preserving competitive accuracy on multiple reasoning benchmarks. For instance, applying NBL to 12 self-attention layers of DeepSeek-R1-Distill-Llama-8B increases inference speed by 32% with less than a 1% accuracy trade-off, making it a flexible and promising approach for improving the inference efficiency of LLMs. The implementation is available at: https://github.com/LIONS-EPFL/NBL.
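
The abstract names two computational ingredients: an LMMSE estimator that fits a linear map from a layer's inputs to its outputs, and CCA-derived canonical correlations that bound how well such a linear map can do. Below is a minimal NumPy sketch of both, assuming paired activation samples X (layer inputs) and Y (attention-layer outputs) gathered from calibration data; the `linearization_score` selection proxy using sum(1 - rho^2) is an illustrative assumption consistent with the abstract, not the paper's exact bound or selection rule.

```python
import numpy as np

def lmmse_linear_map(X, Y, eps=1e-6):
    """Fit Y ~ W X + b with the Linear MMSE estimator from paired samples.

    X: (n, d_in) layer inputs; Y: (n, d_out) attention-layer outputs.
    Returns W of shape (d_out, d_in) and b of shape (d_out,).
    """
    mu_x, mu_y = X.mean(0), Y.mean(0)
    Xc, Yc = X - mu_x, Y - mu_y
    C_xx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])  # regularized input covariance
    C_yx = Yc.T @ Xc / len(X)                             # cross-covariance
    W = C_yx @ np.linalg.inv(C_xx)                        # LMMSE weights: C_yx C_xx^{-1}
    b = mu_y - W @ mu_x
    return W, b

def canonical_correlations(X, Y, eps=1e-6):
    """Canonical correlations between X and Y via the whitened cross-covariance SVD."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    C_xx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])
    C_yy = Yc.T @ Yc / len(Y) + eps * np.eye(Y.shape[1])
    C_xy = Xc.T @ Yc / len(X)

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.clip(w, eps, None))) @ V.T

    T = inv_sqrt(C_xx) @ C_xy @ inv_sqrt(C_yy)
    return np.clip(np.linalg.svd(T, compute_uv=False), 0.0, 1.0)

def linearization_score(X, Y):
    """Hypothetical substitution criterion: layers whose outputs are most
    linearly predictable from their inputs (correlations near 1) are the
    cheapest to replace; lower scores mean lower linearization error."""
    rho = canonical_correlations(X, Y)
    return float(np.sum(1.0 - rho ** 2))
```

Under this reading, one would collect X and Y for each self-attention layer on a calibration set, rank layers by `linearization_score`, and swap the lowest-scoring layers for their fitted (W, b) maps, which matches the abstract's description of replacing attention with a linear map without any fine-tuning.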
Oct-21-2025