Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators
Demirkiran, Cansu, Agrawal, Rashmi, Reddi, Vijay Janapa, Bunandar, Darius, Joshi, Ajay
arXiv.org Artificial Intelligence
Achieving high accuracy while maintaining good energy efficiency in analog DNN accelerators is challenging because high-precision data converters are expensive. In this paper, we overcome this challenge by using the residue number system (RNS) to compose high-precision operations from multiple low-precision operations. This enables us to eliminate the information loss caused by the limited precision of the ADCs. Our study shows that RNS can achieve 99% of FP32 accuracy for state-of-the-art DNN inference using data converters with only 6-bit precision. We propose using the redundant RNS to achieve a fault-tolerant analog accelerator. In addition, we show that RNS can reduce the energy consumption of the data converters within an analog accelerator by several orders of magnitude compared to a regular fixed-point approach.

Deep Neural Networks (DNNs) are commonly used today in a variety of applications, including finance, healthcare, and transportation. The pervasive usage of these DNN models, whose sizes are continuously increasing, forces us to use more compute, communication, and memory resources. Unfortunately, with Moore's Law and Dennard scaling slowing down [1], we can no longer rely on technology scaling.
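The core idea — composing a high-precision operation from multiple low-precision ones — can be illustrated with a small sketch. The moduli below are hypothetical (chosen only to be pairwise coprime and to fit in 6 bits, mirroring the 6-bit-converter claim); the paper's actual modulus selection may differ. Each residue channel performs a low-precision multiply-accumulate, and the full-precision result is recovered with the Chinese Remainder Theorem (CRT):

```python
from math import prod

# Hypothetical pairwise-coprime moduli; each residue fits in 6 bits (< 64),
# echoing the claim that 6-bit data converters suffice.
MODULI = (61, 63, 64)
M = prod(MODULI)  # dynamic range ~2^18

def to_rns(x):
    """Decompose an integer into its low-precision residues."""
    return tuple(x % m for m in MODULI)

def crt(residues):
    """Reconstruct the full-precision integer via the CRT."""
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return total % M

# A dot product computed independently in each residue channel:
# every intermediate value stays below its modulus, so no channel
# ever needs more than 6 bits of precision.
a = [12, 200, 7]
w = [33, 5, 91]
acc = [0] * len(MODULI)
for x, y in zip(a, w):
    xr, yr = to_rns(x), to_rns(y)
    for i, m in enumerate(MODULI):
        acc[i] = (acc[i] + xr[i] * yr[i]) % m

assert crt(tuple(acc)) == sum(x * y for x, y in zip(a, w))  # exact result
```

The reconstruction is exact as long as the true dot product stays within the dynamic range (the product of the moduli), which is why a few narrow channels can stand in for one wide one.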
Jun-15-2023