Entropy-Driven Mixed-Precision Quantization for Deep Network Design: Appendix
Moreover, the entropy $H$ represents the expressiveness of a deep system, which correlates with the performance of a deep neural network [19]. Note that $C_{l-1}$ is equal to 1 when the layer is a depth-wise convolution. Following [19], the input of each layer is assumed to follow a zero-mean distribution when deriving the entropy, so the upper bound of $Q$ is set to $2^{N-1}$. Since we set the quantization step to 1 in Eq. 10, the distribution of $R$ becomes much smoother, and the probability of each quantization level will be close to 0. Finally, the Flash budget constrains the total size of the weights across all network layers.
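To make these quantities concrete, the sketch below estimates the entropy $H$ of one layer's weights after uniform quantization with step size 1, clipped to the signed bound $2^{N-1}$ described above. This is a minimal illustration under our own assumptions, not the paper's implementation: the function name `layer_entropy` and the histogram-based probability estimate are hypothetical.

```python
import numpy as np

def layer_entropy(weights: np.ndarray, num_bits: int) -> float:
    """Estimate the entropy H of quantized layer weights.

    Assumes uniform quantization with step size 1 and clipping to
    the signed range [-2^(N-1), 2^(N-1)], per the derivation above.
    The histogram-based probability estimate is an assumption of
    this sketch, not the authors' method.
    """
    bound = 2 ** (num_bits - 1)                  # upper bound Q = 2^(N-1)
    q = np.clip(np.round(weights), -bound, bound)
    # Empirical probability of each quantization level.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    # Discrete Shannon entropy in bits: H = -sum_i p_i * log2(p_i).
    return float(-(p * np.log2(p)).sum())

# Zero-mean weights, matching the distributional assumption above.
w = np.random.randn(64, 64, 3, 3) * 4.0
print(layer_entropy(w, num_bits=8))
```

With a wider (smoother) weight distribution, the probability mass per level shrinks toward 0 and the measured entropy grows, consistent with the remark about the quantization step above.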