DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression
Wiedemann, Simon, Kirchhoffer, Heiner, Matlage, Stefan, Haase, Paul, Marban, Arturo, Marinc, Talmaj, Neumann, David, Osman, Ahmed, Marpe, Detlev, Schwarz, Heiko, Wiegand, Thomas, Samek, Wojciech
We present DeepCABAC, a novel context-adaptive binary arithmetic coder for compressing deep neural networks. It quantizes each weight parameter by minimizing a weighted rate-distortion function, which implicitly takes the impact of quantization on the accuracy of the network into account. Subsequently, it compresses the quantized values into a bitstream representation with minimal redundancies. We show that DeepCABAC is able to reach very high compression ratios across a wide set of different network architectures and datasets. For instance, we are able to compress the VGG16 ImageNet model by x63.6 with no loss of accuracy, thus being able to

From all the different proposed methods, sparsification followed by weight quantization and entropy coding arguably belongs to the set of most popular approaches, since very high compression ratios can be achieved under such a paradigm (Han et al., 2015a; Louizos et al., 2017; Wiedemann et al., 2018a;b). Whereas much of the research has focused on the sparsification part, substantially less has focused on improving the latter two steps. In fact, most of the proposed (post-sparsity) compression algorithms come with at least one of the following caveats: 1) they decouple the quantization procedure from the subsequent lossless compression algorithm, 2) they ignore correlations between the parameters, and 3) they apply a lossless compression algorithm that produces a bitstream with more redundancies than principally needed (e.g.
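The quantization step described in the abstract, assigning each weight the reconstruction value that minimizes a weighted sum of distortion and estimated bit cost, can be illustrated with a minimal sketch. The centroid grid, the per-centroid bit-cost estimates, the trade-off parameter lmbda, and the function name rd_quantize are illustrative assumptions, not the paper's actual DeepCABAC quantizer.

```python
# Minimal sketch of rate-distortion-optimized scalar quantization of weights.
# All names and values below are assumptions for illustration only.
import numpy as np

def rd_quantize(weights, centroids, bit_costs, lmbda=0.01):
    """Assign each weight to the centroid minimizing D + lmbda * R.

    weights   : 1-D array of float weight values
    centroids : 1-D array of candidate reconstruction values
    bit_costs : 1-D array of estimated bits needed to signal each centroid
    lmbda     : trade-off between distortion (squared error) and rate
    """
    w = weights.reshape(-1, 1)                       # shape (N, 1)
    dist = (w - centroids.reshape(1, -1)) ** 2       # squared error per candidate
    cost = dist + lmbda * bit_costs.reshape(1, -1)   # weighted rate-distortion cost
    idx = np.argmin(cost, axis=1)                    # best centroid per weight
    return centroids[idx], idx

# Toy usage: a coarse symmetric grid where values far from zero cost more bits,
# so the quantizer is biased toward cheap-to-code (near-zero) reconstructions.
weights = np.random.randn(10).astype(np.float32) * 0.1
centroids = np.array([-0.2, -0.1, 0.0, 0.1, 0.2], dtype=np.float32)
bit_costs = np.array([4.0, 3.0, 1.0, 3.0, 4.0])
quantized, indices = rd_quantize(weights, centroids, bit_costs)
```

In a full pipeline, the resulting centroid indices would then be fed to a lossless entropy coder; in the paper this role is played by the context-adaptive binary arithmetic coder.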
arXiv.org Artificial Intelligence
May-15-2019