Low-Precision Mixed-Computation Models for Inference on Edge
Azizi, Seyedarmin, Nazemi, Mahdi, Kamal, Mehdi, Pedram, Massoud
arXiv.org Artificial Intelligence
This paper presents a mixed-computation neural network processing approach for edge applications that incorporates low-precision (low bit-width) Posit and low-precision fixed-point (FixP) number systems. This mixed-computation approach employs 4-bit Posit (Posit4), which offers higher precision around zero, to represent highly sensitive weights, while it uses 4-bit FixP (FixP4) to represent the remaining weights. A heuristic that analyzes the importance and the quantization error of the weights is presented to assign the proper number system to each weight. Additionally, a gradient approximation for the Posit representation is introduced to improve the quality of weight updates during backpropagation. Because fully Posit-based computation incurs high energy consumption, neural network operations are carried out either in FixP or in mixed Posit/FixP. An efficient hardware implementation of a MAC operation is presented in which the first operand is a Posit number while the second operand and the accumulator are in FixP. The efficacy of the proposed low-precision mixed-computation approach is extensively assessed on vision and language models. The results show that, on average, the accuracy of the mixed-computation approach is about 1.5% higher than that of FixP, at the cost of only a 0.19% energy overhead.
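The assignment idea described in the abstract — give Posit4 to the weights where it most reduces sensitivity-weighted quantization error, and FixP4 to the rest — can be sketched numerically. The sketch below is an illustrative assumption, not the paper's actual heuristic: the posit(4,0) value grid, the sensitivity-weighted benefit score, and the `posit_fraction` budget are all choices made here for demonstration.

```python
import numpy as np

# Assumed representable magnitudes of a 4-bit posit (es = 0): tapered
# precision, denser near small magnitudes. Illustrative value set only.
POSIT4_VALUES = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 4.0])
POSIT4_GRID = np.concatenate([-POSIT4_VALUES[:0:-1], POSIT4_VALUES])

def quantize_to_grid(w, grid):
    # Round each weight to the nearest representable grid value.
    idx = np.abs(w[..., None] - grid).argmin(axis=-1)
    return grid[idx]

def quantize_fixp4(w, scale):
    # Symmetric 4-bit fixed point: integer levels -8..7 times a scale.
    return np.clip(np.round(w / scale), -8, 7) * scale

def assign_number_systems(w, sensitivity, posit_fraction=0.25, scale=0.25):
    """Hypothetical heuristic: rank weights by the sensitivity-weighted
    error reduction Posit4 gives over FixP4, and assign Posit4 to at most
    `posit_fraction` of the weights where that reduction is positive."""
    w_fix = quantize_fixp4(w, scale)
    w_pos = quantize_to_grid(w, POSIT4_GRID)
    benefit = sensitivity * (np.abs(w - w_fix) - np.abs(w - w_pos))
    k = int(posit_fraction * w.size)
    order = np.argsort(-benefit.ravel())[:k]
    order = order[benefit.ravel()[order] > 0]   # only where Posit4 helps
    mask = np.zeros(w.size, dtype=bool)
    mask[order] = True
    mask = mask.reshape(w.shape)
    return np.where(mask, w_pos, w_fix), mask
```

By construction the mixed assignment never has higher sensitivity-weighted error than pure FixP4, since Posit4 is only used where it strictly reduces that error; the budget caps how many weights take the (more energy-costly) Posit path.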
Dec-2-2023