RGP: Neural Network Pruning through Its Regular Graph Structure
Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan
Lightweight model design has become an important direction in the application of deep learning technology, and pruning is an effective means to achieve a large reduction in model parameters and FLOPs. Existing neural network pruning methods mostly start from the importance of parameters, designing parameter-evaluation metrics to prune parameters iteratively. These methods are not studied from the perspective of model topology; they may be effective but not efficient, and they require completely different pruning for different datasets. In this paper, we study the graph structure of the neural network and propose regular graph based pruning (RGP) to perform one-shot neural network pruning. We generate a regular graph, set the node degree of the graph to meet the pruning ratio, and reduce the average shortest path length of the graph by swapping its edges to obtain the optimal edge distribution.

Such a process is called the pruning of the neural network. The pruned neural network can usually obtain much faster inference speed than the original model, which is highly significant in actual deployment when the efficiency of the model is critical. Neural network pruning can usually be seen as a three-step pipeline: training the original model, pruning its parameters, and fine-tuning the pruned model. Most of these network pruning methods are therefore data-related, i.e., once model training is completed, the parameters are pruned according to their values, which means that for different datasets the pruned neural network is different. Some pruning methods perform pruning on the initialized model, such as the lottery ticket hypothesis [7], but it is still implemented by pruning after pre-training the original model; it uses the initialized parameters to reset the pruned model so that an initialized subnetwork …
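The graph search the abstract describes can be made concrete. Below is a minimal sketch, assuming the networkx library: generate a random d-regular graph (the fixed degree stands in for the pruning ratio) and greedily accept degree-preserving double edge swaps that reduce the average shortest path length (ASPL). The function name rgp_graph and all node-count, degree, and swap-budget values are illustrative placeholders, not the paper's settings or code.

import random
import networkx as nx

def rgp_graph(n_nodes=32, degree=4, n_trials=500, seed=0):
    # Generate a random d-regular graph; regenerate if it happens
    # to be disconnected, since ASPL is undefined in that case.
    G = nx.random_regular_graph(degree, n_nodes, seed=seed)
    while not nx.is_connected(G):
        seed += 1
        G = nx.random_regular_graph(degree, n_nodes, seed=seed)
    best = nx.average_shortest_path_length(G)
    rng = random.Random(seed)
    for _ in range(n_trials):
        H = G.copy()
        # A double edge swap replaces (u, v), (x, y) with (u, x), (v, y),
        # rewiring the graph while preserving every node's degree,
        # i.e., without changing the pruning ratio.
        try:
            nx.double_edge_swap(H, nswap=1, max_tries=100,
                                seed=rng.randrange(2**32))
        except nx.NetworkXException:
            continue
        if not nx.is_connected(H):
            continue
        aspl = nx.average_shortest_path_length(H)
        if aspl < best:  # keep only swaps that shorten average paths
            G, best = H, aspl
    return G, best

G, aspl = rgp_graph()
print(aspl)  # ASPL after the greedy swap search

Mapping the optimized graph back onto a network structure (nodes as channel groups, edges as retained connections) is the paper's final step and is not shown in this sketch.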
arXiv.org Artificial Intelligence
Oct-28-2021