MiCo: End-to-End Mixed Precision Neural Network Co-Exploration Framework for Edge AI

Zijun Jiang, Yangdi Lyu

arXiv.org Artificial Intelligence 

Quantized Neural Networks (QNNs) with extremely low-bitwidth data have proven promising for efficient storage and computation on edge devices. To further reduce the accuracy drop while increasing speedup, layer-wise mixed-precision quantization (MPQ) has become a popular solution. However, existing algorithms for exploring MPQ schemes are limited in flexibility and efficiency. Comprehending the complex impacts of different MPQ schemes on post-training quantization and quantization-aware training results is a challenge for conventional methods. Furthermore, an end-to-end framework for the optimization and deployment of MPQ models is missing in existing work. In this paper, we propose MiCo, a holistic MPQ exploration and deployment framework for edge AI applications. The framework adopts a novel optimization algorithm to search for quantization schemes with the highest accuracy while meeting latency constraints. Hardware-aware latency models are built for different hardware targets to enable fast exploration. After the exploration, the framework enables direct deployment from PyTorch MPQ models to bare-metal C code, leading to end-to-end speedup with minimal accuracy drop.

Tiny machine learning (ML) and edge artificial intelligence (AI) are becoming increasingly important and valuable in today's AI ecosystem. However, deploying AI models on edge devices is challenging due to tight resource constraints.
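To make the latency-constrained MPQ search concrete, the sketch below shows a minimal greedy layer-wise bitwidth assignment under a latency budget. This is an illustrative stand-in, not MiCo's actual optimization algorithm: the `layer_latency` model, the per-layer `sensitivity` values (a proxy for quantization-induced accuracy loss), and the greedy scoring rule are all hypothetical assumptions.

```python
def layer_latency(macs, bits):
    # Toy hardware-aware latency model: latency scales with the layer's
    # MAC count and the operand bitwidth (hypothetical, for illustration).
    return macs * bits / 8.0

def greedy_mpq(macs_per_layer, sensitivity, budget, choices=(8, 4, 2)):
    """Greedily lower per-layer bitwidths until the latency budget is met.

    At each step, pick the layer whose next bitwidth reduction gives the
    best latency-saving-per-sensitivity ratio.
    """
    bits = [choices[0]] * len(macs_per_layer)

    def total_latency():
        return sum(layer_latency(m, b) for m, b in zip(macs_per_layer, bits))

    while total_latency() > budget:
        best_idx, best_score, best_bits = None, None, None
        for i, b in enumerate(bits):
            lower = [c for c in choices if c < b]
            if not lower:
                continue  # this layer is already at the minimum bitwidth
            nb = max(lower)
            gain = layer_latency(macs_per_layer[i], b) - layer_latency(macs_per_layer[i], nb)
            cost = sensitivity[i] * (b - nb)  # proxy for accuracy drop
            score = gain / cost
            if best_score is None or score > best_score:
                best_idx, best_score, best_bits = i, score, nb
        if best_idx is None:
            break  # budget cannot be met even at minimum precision
        bits[best_idx] = best_bits
    return bits, total_latency()
```

For example, with three layers of 100/200/50 MACs, sensitivities 1.0/0.5/2.0, and a budget of 250, the search lowers only the large, insensitive middle layer to 4 bits.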
