DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models
Neural Information Processing Systems
Large Language Models (LLMs) have achieved remarkable success in various natural language processing tasks, including language modeling, understanding, and generation. However, the increased memory and computational costs associated with these models pose significant challenges for deployment on resource-limited devices. Structural pruning has emerged as a promising solution to reduce the costs of LLMs without requiring post-processing steps. Prior structural pruning methods either follow structural dependence, which limits flexibility, or introduce non-trivial additional parameters by incorporating different projection matrices. In this work, we propose a novel approach that relaxes the constraint imposed by regular structural pruning methods and eliminates the structural dependence along the embedding dimension.
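To make the contrast concrete, here is a minimal NumPy sketch of the underlying idea, not the authors' algorithm: conventional structural pruning must keep the same embedding channels in every layer (structural dependence along the embedding dimension), whereas dimension-independent pruning lets each layer select its own channels and reconciles them through gather/scatter on the residual stream. All variable names and the two-layer toy model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # full embedding (hidden) dimension of the toy model

# A toy two-layer stack: each "layer" is a linear map on the hidden state.
W1 = rng.standard_normal((d_model, d_model))
W2 = rng.standard_normal((d_model, d_model))
x = rng.standard_normal(d_model)

# Conventional structural pruning: one shared set of kept embedding
# channels, so every layer's rows AND columns are sliced identically.
shared_keep = np.array([0, 2, 5, 7])
y_shared = W2[np.ix_(shared_keep, shared_keep)] @ (
    W1[np.ix_(shared_keep, shared_keep)] @ x[shared_keep]
)

# Dimension-independent pruning (illustrative): each layer chooses its
# own channel subset; the full-width residual stream `h` lets layer 2
# read channels that differ from the ones layer 1 wrote.
keep1 = np.array([0, 1, 4, 6])  # channels kept by layer 1
keep2 = np.array([2, 3, 5, 7])  # channels kept by layer 2 (different!)
h = np.zeros(d_model)
h[keep1] = W1[np.ix_(keep1, keep1)] @ x[keep1]  # layer 1 writes its channels
y_indep = W2[np.ix_(keep2, keep2)] @ h[keep2]   # layer 2 reads its own
```

Both variants run at the same reduced width (4 of 8 channels per layer), but only the second allows the kept channels to differ across layers, which is the flexibility the abstract refers to.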