Searching for Low-Bit Weights in Quantized Neural Networks

Neural Information Processing Systems 

However, the quantization functions used in most conventional quantization methods are non-differentiable, which increases the optimization difficulty of quantized networks. Compared with full-precision parameters (i.e., 32-bit floating-point numbers), low-bit values are selected from a much smaller set. For example, there are only 16 possibilities in a 4-bit space. Thus, we propose to regard the discrete weights in an arbitrary quantized neural network as searchable variables and to search for them accurately with a differentiable method. In particular, each weight is represented as a probability distribution over the discrete value set. The probabilities are optimized during training, and the values with the highest probability are selected to establish the desired quantized network.
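The core idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate set, logit parameterization, and the use of an expectation as the "soft" training-time weight are assumptions chosen to show how a discrete weight search can be made differentiable.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the candidate axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 16 candidate values for a 4-bit weight space (uniform grid is an assumption)
candidates = np.linspace(-1.0, 1.0, 16)

# one learnable logit vector per weight; here 3 weights for illustration
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 16))

# during training: a differentiable "soft" weight, the expectation of the
# candidates under each weight's probability distribution
probs = softmax(logits)              # shape (3, 16), rows sum to 1
soft_w = probs @ candidates          # shape (3,)

# after training: select the highest-probability candidate per weight,
# yielding a genuinely low-bit network
hard_w = candidates[np.argmax(probs, axis=1)]
```

Because `probs` is produced by a softmax over continuous logits, gradients flow to the logits during training, sidestepping the non-differentiable rounding step of conventional quantization.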
