
Neural Information Processing Systems 

Even if the instance norms are not 1, one can often assume they are known, because that only requires storing one real number per instance. With known norms, LM quantization is essentially the same; that is, we quantize the data by scaling the quantizer according to the norm of each vector. For the uniform quantizer, we set the largest finite borders equal to those of the corresponding LM quantizer to make a fair comparison. It should now be clear that "compressed learning" has been a popular topic over the past 10 years, with many good papers in premier conference proceedings and journals, covering a wide range of applications: similarity search, clustering, classification, regression, etc. We agree that the definitions in the literature are not always consistent, and we agree with Reviewer 3 that one should be more consistent with the definitions.
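The norm-scaled quantization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the number of levels, and the border value are hypothetical parameters chosen for the example.

```python
import numpy as np

def norm_scaled_uniform_quantize(x, levels=8, max_border=1.0):
    """Quantize each row of x with a uniform quantizer whose finite
    borders are scaled by that row's Euclidean norm.

    `levels` and `max_border` are illustrative, not values from the paper.
    Returns the dequantized values and the codeword indices.
    """
    # One real number stored per instance: its norm.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    norms = np.where(norms == 0, 1.0, norms)  # guard against zero vectors

    # Uniform codebook on [-max_border, max_border]; scaling the data by
    # 1/norm is equivalent to scaling the quantizer borders by the norm.
    codebook = np.linspace(-max_border, max_border, levels)
    scaled = x / norms

    # Nearest-codeword quantization, then undo the scaling.
    idx = np.abs(scaled[..., None] - codebook).argmin(axis=-1)
    return codebook[idx] * norms, idx
```

Because the quantizer operates on unit-norm data, the same codebook serves every instance, and reconstruction only needs the stored norm and the codeword indices.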
