Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators

Cansu Demirkiran, Rashmi Agrawal, Vijay Janapa Reddi, Darius Bunandar, Ajay Joshi

arXiv.org Artificial Intelligence 

Achieving high accuracy while maintaining good energy efficiency in analog DNN accelerators is challenging because high-precision data converters are expensive. In this paper, we overcome this challenge by using the residue number system (RNS) to compose high-precision operations from multiple low-precision operations. This enables us to eliminate the information loss caused by the limited precision of the ADCs. Our study shows that RNS can achieve 99% of FP32 accuracy for state-of-the-art DNN inference using data converters with only 6-bit precision. We also propose using redundant RNS to achieve a fault-tolerant analog accelerator. In addition, we show that RNS can reduce the energy consumption of the data converters within an analog accelerator by several orders of magnitude compared to a regular fixed-point approach.

Deep Neural Networks (DNNs) are commonly used today in a variety of applications including finance, healthcare, and transportation. The pervasive usage of these DNN models, whose sizes are continuously increasing, forces us to use ever more compute, communication, and memory resources. Unfortunately, with Moore's Law and Dennard Scaling slowing down [1], we can no longer rely on technology scaling to meet this demand.
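To make the RNS idea above concrete, here is a minimal sketch, not the authors' implementation: the moduli, function names, and example values are illustrative assumptions. It shows how a dot product can be computed independently in several low-precision residue channels (each bounded by a small modulus, so roughly 6-bit wide here) and then reconstructed exactly via the Chinese Remainder Theorem (CRT):

```python
from math import prod

# Pairwise-coprime moduli (illustrative choice); each residue fits in ~6 bits.
MODULI = (61, 63, 64)
M = prod(MODULI)  # dynamic range: results are exact as long as they stay below M = 245,952


def to_rns(x):
    """Represent integer x by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)


def crt_reconstruct(residues):
    """Recover x (mod M) from its residues via the Chinese Remainder Theorem."""
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M


def rns_dot(a, b):
    """Dot product computed independently in each low-precision residue channel."""
    acc = [0] * len(MODULI)
    for x, y in zip(a, b):
        rx, ry = to_rns(x), to_rns(y)
        for i, m in enumerate(MODULI):
            # Every intermediate value in channel i stays below modulus m.
            acc[i] = (acc[i] + rx[i] * ry[i]) % m
    return crt_reconstruct(acc)


a = [100, 200, 12]
b = [321, 45, 67]
assert rns_dot(a, b) == sum(x * y for x, y in zip(a, b))  # 41,904 < M, so recovery is exact
```

Because each residue channel only ever produces values smaller than its modulus, the readout in that channel needs far fewer ADC bits than a single full-precision accumulation would, which is the intuition behind the 6-bit data-converter result stated in the abstract.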
