
Approximation in shift-invariant spaces with deep ReLU neural networks

Yang, Yunfei

arXiv.org Machine Learning

We construct deep ReLU neural networks to approximate functions in dilated shift-invariant spaces generated by a continuous function with compact support, and study the approximation rates with respect to the number of neurons. The network construction is based on the bit extraction and data fitting capacity of deep neural networks. Combined with existing results on approximation from shift-invariant spaces, this allows us to estimate the approximation rates for classical function spaces such as Sobolev spaces and Besov spaces. We also give lower bounds on the $L^p([0,1]^d)$ approximation error for Sobolev spaces, which show that our construction is asymptotically optimal up to a logarithmic factor.
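For concreteness, the spaces in question are commonly defined as follows (a standard formulation; the paper's exact normalization of the dilation may differ). Given a generator $\varphi$ with compact support, the shift-invariant space and its dilation by a factor $N > 0$ are

$$ V(\varphi) = \left\{ \sum_{k \in \mathbb{Z}^d} c_k \, \varphi(\cdot - k) \right\}, \qquad V_N(\varphi) = \{ f(N \cdot) : f \in V(\varphi) \}, $$

so the total error of approximating a target $f$ splits into the error of approximating $f$ from $V_N(\varphi)$ plus the error of realizing the chosen element of $V_N(\varphi)$ with a ReLU network.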


Progressive Weight Pruning of Deep Neural Networks using ADMM

arXiv.org Machine Learning

Deep neural networks (DNNs), although achieving human-level performance in many domains, have very large model sizes that hinder their broader application on edge computing devices. Extensive research has been conducted on DNN model compression and pruning, but most previous work took heuristic approaches. This work proposes a progressive weight pruning approach based on ADMM (Alternating Direction Method of Multipliers), a powerful technique for non-convex optimization problems with potentially combinatorial constraints. Motivated by dynamic programming, the proposed method reaches extremely high pruning rates by using a sequence of partial prunings with moderate pruning rates, thereby resolving the accuracy degradation and long convergence times encountered when pursuing extremely high pruning ratios directly. It achieves up to a 34x pruning rate on the ImageNet dataset and a 167x pruning rate on the MNIST dataset, significantly higher than those reported in the literature. Under the same number of epochs, the proposed method also achieves faster convergence and higher compression rates. The code and pruned DNN models are released at bit.ly/2zxdlss
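The abstract describes the method only at a high level; below is a minimal NumPy sketch of the generic ADMM pruning template it builds on (loss step, projection onto a cardinality constraint, dual update), applied progressively to a toy least-squares model rather than a DNN. All names (project_topk, admm_prune) and hyperparameters are illustrative assumptions, not the paper's released implementation.

import numpy as np

def project_topk(w, k):
    # Euclidean projection onto {w : at most k nonzeros}:
    # keep the k largest-magnitude entries, zero out the rest.
    z = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    z[idx] = w[idx]
    return z

def admm_prune(A, b, keep_fracs=(0.5, 0.25, 0.1), rho=1.0,
               n_admm=50, inner_steps=200, lr=1e-3):
    # Progressively prune the weights of min_w ||A w - b||^2 under a
    # cardinality constraint, tightening the constraint in stages
    # (the "partial prunings with moderate rates" idea).
    n = A.shape[1]
    w = np.linalg.lstsq(A, b, rcond=None)[0]  # dense "pretrained" weights
    for frac in keep_fracs:                   # moderate -> extreme pruning
        k = max(1, int(frac * n))
        z = project_topk(w, k)                # auxiliary (sparse) variable
        u = np.zeros(n)                       # scaled dual variable
        for _ in range(n_admm):               # ADMM iterations
            for _ in range(inner_steps):      # W-step: gradient descent on
                grad = 2 * A.T @ (A @ w - b) + rho * (w - z + u)
                w = w - lr * grad             # loss + quadratic penalty
            z = project_topk(w + u, k)        # Z-step: projection
            u = u + (w - z)                   # dual update
        w = z                                 # hard-prune before next stage
    return w

# Toy usage: recover a 5-sparse weight vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
w_true = project_topk(rng.standard_normal(50), 5)
b = A @ w_true
w_pruned = admm_prune(A, b)
print("nonzeros:", np.count_nonzero(w_pruned))

Each stage hard-prunes to the current cardinality before tightening the constraint, mirroring how the abstract reaches an extreme pruning ratio through a sequence of moderate ones rather than in a single step.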