Custom Gradient Estimators are Straight-Through Estimators in Disguise
Matt Schoenbauer, Daniele Moro, Lukasz Lew, Andrew Howard
Quantization-aware training comes with a fundamental challenge: the derivatives of quantization functions such as rounding are zero almost everywhere and nonexistent elsewhere. Various differentiable approximations of quantization functions have been proposed to address this issue. In this paper, we prove that a large class of weight gradient estimators is approximately equivalent to the straight-through estimator (STE). Specifically, after swapping in the STE and adjusting both the weight initialization and the learning rate in SGD, the model trains in almost exactly the same way as it did with the original gradient estimator. Moreover, we show that for adaptive learning-rate algorithms like Adam, the same result holds without any modification to the weight initialization or learning rate. These results reduce the burden of hyperparameter tuning for practitioners of QAT, who can now confidently choose the STE for gradient estimation and ignore more complex gradient estimators. We experimentally show that these results hold both for a small convolutional model trained on the MNIST dataset and for a ResNet50 model trained on ImageNet.
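For readers unfamiliar with the STE referenced in the abstract: it applies the quantizer exactly in the forward pass but treats it as the identity in the backward pass, so gradients flow through unchanged. Below is a minimal sketch of an STE for rounding, written in JAX; the function name and example values are illustrative and not taken from the paper.

```python
# Minimal straight-through estimator (STE) sketch for rounding, assuming JAX.
# Forward: round(x). Backward: gradient passes through as if round were the identity.
import jax
import jax.numpy as jnp

def ste_round(x):
    # stop_gradient hides the rounding residual from autodiff,
    # so d(ste_round)/dx is treated as 1 everywhere instead of 0.
    return x + jax.lax.stop_gradient(jnp.round(x) - x)

# Toy check: the gradient of sum(ste_round(x)**2) is 2 * round(x), not zero.
grad_fn = jax.grad(lambda x: jnp.sum(ste_round(x) ** 2))
print(grad_fn(jnp.array([0.3, 1.7, -2.4])))  # approximately [0., 4., -4.]
```

The paper's claim is that many more elaborate gradient estimators for the quantizer reduce, up to a reparameterization of the weights and learning rate, to exactly this identity-backward behavior.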
arXiv.org Artificial Intelligence
May-22-2024