Decay Pruning Method: Smooth Pruning With a Self-Rectifying Procedure

Minghao Yang, Linlin Gao, Pengyuan Li, Wenbo Li, Yihong Dong, Zhiying Cui

arXiv.org Artificial Intelligence 

Deep Neural Networks (DNNs) have been widely applied to tasks such as image classification [22; 40], object segmentation [33; 35], and object detection [6; 43]. However, the growing size and complexity of DNNs entail substantial computational and memory costs, which hinders deployment on resource-constrained platforms such as mobile and embedded devices. Developing efficient methods that reduce the computational complexity and storage demands of large models, while minimizing performance degradation, has therefore become essential.

Network pruning is one of the most popular model compression techniques. Current pruning methods fall into two categories: unstructured and structured pruning [5]. Unstructured pruning [11; 24] removes individual weights from a network to create fine-grained sparsity. Although such approaches strike an excellent balance between model size reduction and accuracy retention, they typically require specialized hardware support to realize actual speedups, which is impractical in general-purpose computing environments. Conversely, structured pruning [23; 18; 29] avoids this hardware dependency by removing entire redundant structures (e.g., filters or channels), yielding a coarser but hardware-compatible form of sparsity. As a result, structured pruning has become popular and is extensively utilized.
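To make the distinction concrete, the following is a minimal NumPy sketch (not the paper's method) contrasting the two families on a toy weight matrix: unstructured magnitude pruning zeroes individual entries and keeps the matrix shape, whereas structured pruning drops whole rows (standing in for filters or neurons), producing a physically smaller layer that dense hardware can exploit directly.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))  # toy weight matrix: 4 output "filters", 6 inputs

# Unstructured pruning: zero the smallest-magnitude individual weights.
# Fine-grained sparsity; the matrix shape is unchanged, so speedups
# require sparse-kernel or hardware support.
k = W.size // 2  # prune 50% of the weights
thresh = np.sort(np.abs(W).ravel())[k]
unstructured = np.where(np.abs(W) < thresh, 0.0, W)

# Structured pruning: rank whole rows by L1 norm and remove the least
# important ones. The result is a genuinely smaller dense matrix.
row_importance = np.abs(W).sum(axis=1)
keep = np.sort(np.argsort(row_importance)[2:])  # drop 2 of 4 rows
structured = W[keep]

print(unstructured.shape)        # unchanged shape, scattered zeros
print(int((unstructured == 0).sum()))
print(structured.shape)          # smaller layer after structured pruning
```

The row-wise L1 criterion here is one common importance heuristic for structured pruning; actual methods differ in how importance is scored and how the network is fine-tuned afterwards.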
