A Channel-Pruned and Weight-Binarized Convolutional Neural Network for Keyword Spotting
Lyu, Jiancheng, Sheen, Spencer
We study channel-number reduction in combination with weight binarization (1-bit weight precision) to trim a convolutional neural network for a keyword spotting (classification) task. We adopt a group-wise splitting method based on the group Lasso penalty to achieve over 50% channel sparsity while keeping the accuracy loss within 0.25%. We present an effective three-stage training procedure to balance accuracy and sparsity.

Keywords: Convolutional Neural Network · Channel Pruning · Weight Binarization · Classification.

1 Introduction

Reducing the complexity of neural networks while maintaining their performance is both fundamental and practical for resource-limited platforms such as mobile phones. In this paper, we integrate two methods, namely channel pruning and weight quantization, to trim down the number of parameters of a keyword spotting convolutional neural network (CNN, [4]).
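The two ingredients named above can be sketched concretely. The following is a minimal NumPy illustration, not the paper's implementation: a group Lasso penalty that treats each output channel of a conv weight tensor as one group (driving a group's norm to zero effectively prunes that channel), and a common BinaryConnect-style 1-bit quantizer that replaces weights by their signs scaled by a per-channel mean magnitude. Function names and the choice of scaling are assumptions for illustration.

```python
import numpy as np

def group_lasso_penalty(weights):
    """Group Lasso penalty over output channels.

    `weights` has shape (out_channels, in_channels, kH, kW); each
    output channel is one group, so the penalty is the sum of the
    per-channel Frobenius norms (illustrative, not the paper's code).
    """
    flat = weights.reshape(weights.shape[0], -1)
    return float(np.sum(np.linalg.norm(flat, axis=1)))

def binarize_weights(weights):
    """1-bit weight quantization: sign(w) scaled by the mean absolute
    value of each output channel (a common binarization scheme)."""
    flat = weights.reshape(weights.shape[0], -1)
    alpha = np.mean(np.abs(flat), axis=1)        # per-channel scale
    signs = np.where(weights >= 0, 1.0, -1.0)    # 1-bit sign pattern
    return alpha.reshape(-1, 1, 1, 1) * signs
```

A channel whose group norm the penalty has driven (near) to zero can then be removed entirely, shrinking the layer; binarization further reduces each surviving weight to one bit plus a shared scale.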
Sep-12-2019