APSQ: Additive Partial Sum Quantization with Algorithm-Hardware Co-Design
Tan, Yonghao, Dong, Pingcheng, Wu, Yongkun, Liu, Yu, Liu, Xuejiao, Luo, Peng, Liu, Shih-Yang, Huang, Xijie, Zhang, Dong, Liang, Luhong, Cheng, Kwang-Ting
–arXiv.org Artificial Intelligence
DNN accelerators have advanced considerably through model compression and specialized dataflow techniques. However, frequent accesses to high-precision partial sums (PSUMs) create excessive memory demands in architectures that use input- or weight-stationary dataflows, and traditional compression strategies have typically overlooked PSUM quantization, even though PSUMs may account for 69% of power consumption. This study introduces a novel Additive Partial Sum Quantization (APSQ) method that seamlessly integrates PSUM accumulation into the quantization framework. A grouping strategy that combines APSQ with PSUM quantization, supported by a reconfigurable architecture, is further proposed. APSQ performs nearly losslessly on NLP and CV tasks across BERT, Segformer, and EfficientViT models while compressing PSUMs to INT8, reducing energy costs by 28-87%. Extended experiments on LLaMA2-7B demonstrate the potential of APSQ for large language models. Code is available at https://github.com/Yonghao-Tan/APSQ.
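The abstract does not spell out the APSQ algorithm itself, but the general idea it names can be illustrated: instead of storing full-precision partial sums between accumulation steps, the running PSUM is quantized to INT8 after each group of multiply-accumulates, so only an 8-bit value needs to be kept in the PSUM buffer. The sketch below is a minimal simulation of that idea; the function names, group size, and scale are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def quantize_int8(x, scale):
    # Symmetric INT8 quantization: round to the nearest step of
    # `scale` and clamp the integer code to [-128, 127].
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale  # dequantized value carried into the next group

def grouped_psum_dot(a, w, group_size=16, psum_scale=0.5):
    # Illustrative grouped accumulation: after each group of
    # multiply-accumulates, the running partial sum (PSUM) is
    # quantized, mimicking an INT8 PSUM buffer between groups.
    k = a.shape[0]
    psum = 0.0
    for g in range(0, k, group_size):
        psum += float(a[g:g + group_size] @ w[g:g + group_size])
        psum = quantize_int8(psum, psum_scale)
    return psum
```

With inputs that land exactly on the quantization grid the result matches the full-precision dot product; saturation and rounding error appear once the running sum exceeds the representable INT8 range for the chosen scale.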
May-8-2025