SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning

Nonnenmacher, Manuel, Pfeil, Thomas, Steinwart, Ingo, Reeb, David

arXiv.org Machine Learning 

Pruning neural networks reduces inference time and memory costs. On standard hardware, these benefits are especially prominent if coarse-grained structures, like feature maps, are pruned. We devise two novel saliency-based methods for second-order structured pruning (SOSP) that include correlations among all structures and layers. Our main method, SOSP-H, employs an innovative second-order approximation that enables saliency evaluations via fast Hessian-vector products. We validate SOSP-H by comparing it to our second method, SOSP-I, which uses a well-established Hessian approximation, and to numerous state-of-the-art methods. While SOSP-H performs on par with or better than these methods in terms of accuracy, it has clear advantages in scalability and efficiency. This allowed us to scale SOSP-H to large-scale vision tasks, even though it captures correlations across all layers of the network. To underscore the global nature of our pruning methods, we evaluate their performance not only by removing structures from a pretrained network, but also by detecting architectural bottlenecks. We show that our algorithms systematically reveal architectural bottlenecks, which we then remove to further increase the accuracy of the networks.

Deep neural networks have consistently grown in size in recent years, with corresponding gains in performance. However, this increase in size leads to slower inference, higher computational requirements, and higher cost. To reduce the size of the networks without affecting their performance, a large number of pruning algorithms have been proposed (e.g., LeCun et al., 1990; Hassibi et al., 1993; Reed, 1993; Han et al., 2015; Blalock et al., 2020). Pruning can either be unstructured, i.e., removing individual weights, or structured, i.e., removing entire substructures like nodes or channels.
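The fast Hessian-vector products that SOSP-H relies on can be illustrated with a minimal sketch, under assumptions of our own: a toy quadratic loss and a finite-difference approximation (in practice such products are obtained via automatic differentiation, e.g. double backpropagation). The point the sketch makes is that the product H v is computable from gradients alone, without ever materializing the d × d Hessian:

```python
import numpy as np

# Hedged sketch (not the paper's implementation): a Hessian-vector product
# (HVP) computed via a central finite difference of the gradient, so the
# full Hessian is never formed. The toy loss and all names here are our own.

def loss_grad(w, A, b):
    # Gradient of the toy quadratic loss L(w) = 0.5 w^T A w - b^T w.
    return A @ w - b

def hvp(w, v, A, b, eps=1e-5):
    # Central difference of the gradient approximates H v using only
    # two gradient evaluations, regardless of the dimension of w.
    g_plus = loss_grad(w + eps * v, A, b)
    g_minus = loss_grad(w - eps * v, A, b)
    return (g_plus - g_minus) / (2 * eps)

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T                      # symmetric PSD matrix: the exact Hessian
b = rng.standard_normal(d)
w = rng.standard_normal(d)
v = rng.standard_normal(d)

approx = hvp(w, v, A, b)
exact = A @ v                    # for a quadratic loss, the Hessian is A
print(np.allclose(approx, exact, atol=1e-4))  # True
```

A second-order saliency score for a candidate structure can then be formed from inner products of perturbation vectors with such HVPs, which is what makes an all-layer, all-structure evaluation tractable at scale.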