IPTQ-ViT: Post-Training Quantization of Non-linear Functions for Integer-only Vision Transformers
Gihwan Kim, Jemin Lee, Hyungshin Kim
arXiv.org Artificial Intelligence
Previous Quantization-Aware Training (QAT) methods for vision transformers rely on expensive retraining to recover the accuracy lost when quantizing non-linear layers, limiting their use in resource-constrained environments. Existing Post-Training Quantization (PTQ) methods, in contrast, either quantize non-linear functions only partially or adjust activation distributions to maintain accuracy, and thus fail to achieve fully integer-only inference. In this paper, we introduce IPTQ-ViT, a novel PTQ framework for fully integer-only vision transformers that requires no retraining. We present two approximation functions: a polynomial-based GELU optimized for vision data and a bit-shifting-based Softmax designed to improve approximation accuracy under PTQ. In addition, we propose a unified metric that integrates quantization sensitivity, perturbation, and computational cost to select the optimal approximation function for each activation layer. IPTQ-ViT outperforms previous PTQ methods, improving top-1 accuracy by up to 6.44%p (avg. 1.78%p) on image classification and mAP by 1.0 on object detection. It also outperforms partial floating-point PTQ methods under W8A8 and W4A8 settings, and achieves accuracy and latency comparable to integer-only QAT methods. We plan to release our code at https://github.com/gihwan-kim/IPTQ-ViT.git.
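To give a concrete sense of what a polynomial-based GELU approximation looks like, here is a minimal floating-point sketch using the well-known second-order erf polynomial from prior integer-only quantization work (I-BERT). The constants `A`, `B`, `C` are taken from that prior work and are an assumption for illustration; they are not IPTQ-ViT's vision-optimized fit, and the integer/fixed-point machinery of a real PTQ pipeline is omitted.

```python
import math

# Assumed constants from I-BERT's second-order polynomial fit to erf;
# IPTQ-ViT's vision-optimized polynomial may differ.
A, B, C = -0.2888, -1.769, 1.0

def poly_erf(x: float) -> float:
    # Second-order polynomial fit to erf on [B, -B], extended to all
    # inputs by sign symmetry and clipping |x| at -B.
    sign = 1.0 if x >= 0 else -1.0
    x = min(abs(x), -B)
    return sign * (A * (x + B) ** 2 + C)

def poly_gelu(x: float) -> float:
    # GELU(x) = x * 0.5 * (1 + erf(x / sqrt(2))), with erf replaced
    # by the polynomial approximation above.
    return x * 0.5 * (1.0 + poly_erf(x / math.sqrt(2.0)))

if __name__ == "__main__":
    # Compare against the exact GELU at a few points.
    for x in (-3.0, -1.0, 0.0, 1.0, 3.0):
        exact = x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        print(f"{x:+.1f}: approx={poly_gelu(x):+.4f} exact={exact:+.4f}")
```

Because the polynomial uses only additions and multiplications, it maps cleanly onto integer arithmetic once inputs and constants are expressed at fixed scale, which is what makes this family of approximations attractive for fully integer-only inference.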
Nov-20-2025