Optimization Strategies for Enhancing Resource Efficiency in Transformers & Large Language Models
Tom Wallace, Naser Ezzati-Jivan, Beatrice Ombuki-Berman
–arXiv.org Artificial Intelligence
Advancements in Natural Language Processing are heavily reliant on the Transformer architecture, whose improvements come at substantial resource costs due to ever-growing model sizes. This study explores optimization techniques, including Quantization, Knowledge Distillation, and Pruning, focusing on energy and computational efficiency while retaining performance. Among standalone methods, 4-bit Quantization significantly reduces energy use with minimal accuracy loss. Hybrid approaches, such as NVIDIA's Minitron, which combines Knowledge Distillation and Structured Pruning, further demonstrate promising trade-offs between size reduction and accuracy retention. A novel optimization equation is introduced, offering a flexible framework for comparing various methods. By investigating these compression methods, we provide valuable insights for developing more sustainable and efficient LLMs, drawing attention to the often-overlooked concern of energy efficiency.
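The abstract singles out 4-bit Quantization as the strongest standalone method for cutting memory and energy use. As a rough illustration of the general idea only (not the paper's specific implementation), the sketch below applies symmetric per-tensor 4-bit quantization to a weight matrix in NumPy; the function names, the toy weight matrix, and the per-tensor scaling choice are all illustrative assumptions.

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Symmetric per-tensor 4-bit quantization: map float weights to
    signed integers in [-8, 7] using a single scale factor.
    (Illustrative sketch; real schemes often quantize per channel or group.)"""
    scale = np.max(np.abs(weights)) / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

# Toy example: a small random weight matrix standing in for a model layer.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)

q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)

# Memory footprint: 32-bit floats vs. 4-bit codes (two codes per byte).
fp32_bytes = w.size * 4
int4_bytes = w.size // 2
print(f"fp32: {fp32_bytes} B, int4: {int4_bytes} B "
      f"({fp32_bytes / int4_bytes:.0f}x smaller)")
print(f"mean abs reconstruction error: {np.mean(np.abs(w - w_hat)):.6f}")
```

The 8x reduction in weight storage is what drives the energy savings the paper reports; the accuracy question is whether the reconstruction error stays small enough not to degrade downstream task performance.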
Jan-16-2025