NVR: Vector Runahead on NPUs for Sparse Memory Access
Wang, Hui, Zhao, Zhengpeng, Wang, Jing, Du, Yushu, Cheng, Yuan, Guo, Bing, Xiao, He, Ma, Chenhao, Han, Xiaomeng, You, Dean, Guan, Jiapeng, Wei, Ran, Yang, Dawei, Jiang, Zhe
arXiv.org Artificial Intelligence
Deep Neural Networks increasingly leverage sparsity to curb the growth of model parameter size. However, reducing wall-clock time through sparsity and pruning remains challenging due to irregular memory access patterns, which lead to frequent cache misses. In this paper, we present NPU Vector Runahead (NVR), a prefetching mechanism tailored for NPUs to address the cache-miss problem in sparse DNN workloads. NVR provides a general micro-architectural solution for sparse DNN workloads without requiring compiler or algorithmic support, operating as a decoupled, speculative, lightweight hardware sub-thread alongside the NPU with minimal hardware overhead (under 5%). NVR achieves an average 90% reduction in cache misses compared to SOTA prefetching in general-purpose processors, delivering a 4x average speedup on sparse workloads versus NPUs without prefetching. Moreover, we investigate the benefit of adding a small cache (16KB) to the NPU in combination with NVR. Our evaluation shows that expanding this modest cache delivers 5x greater performance benefit than increasing the L2 cache size by the same amount.

Fortunately, these workloads are typically over-parameterised [3]: up to 90% of parameters in prevalent models can be pruned while maintaining comparable performance [4]. This redundancy presents an opportunity to leverage sparsity to reduce such intensive resource demands. In theory, more fine-grained sparsity patterns yield higher acceleration by skipping more zero-valued elements.
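To make the irregular access pattern concrete, the sketch below (illustrative C, with assumed names such as `spmv_csr` and `RUNAHEAD`; it is not the paper's implementation) shows a CSR sparse matrix-vector product. The gather through `col_idx` produces data-dependent loads that conventional stride prefetchers cannot predict, which is the cache-miss problem NVR targets. A runahead-style software analogue is folded in: the loop speculatively resolves indices a fixed distance ahead and issues prefetches for the indirect targets.

```c
#include <stddef.h>

/* CSR sparse matrix-vector product: y = A * x.
 * Hypothetical, illustrative code -- not the NVR hardware itself.
 * The load x[col_idx[j]] is an indirect gather: its address is
 * unknown until col_idx[j] arrives, so stride prefetchers miss. */
void spmv_csr(size_t n_rows,
              const size_t *row_ptr,   /* n_rows + 1 entries */
              const size_t *col_idx,   /* one entry per nonzero */
              const float  *val,       /* one entry per nonzero */
              const float  *x,
              float        *y)
{
    enum { RUNAHEAD = 64 };  /* assumed lookahead distance, in nonzeros */
    size_t nnz = row_ptr[n_rows];

    for (size_t i = 0; i < n_rows; i++) {
        float acc = 0.0f;
        for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; j++) {
            /* Software analogue of a decoupled runahead sub-thread:
             * speculatively resolve the indirect index RUNAHEAD
             * nonzeros ahead and prefetch the gather's target. */
            size_t ahead = j + RUNAHEAD;
            if (ahead < nnz)
                __builtin_prefetch(&x[col_idx[ahead]], 0 /* read */, 1);

            acc += val[j] * x[col_idx[j]];  /* irregular gather */
        }
        y[i] = acc;
    }
}
```

In NVR itself, this lookahead runs as a speculative hardware sub-thread beside the NPU rather than inline in the compute loop, so prefetch issue does not stall the datapath; the `RUNAHEAD` distance above is purely an assumed tuning knob for the sketch.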
Mar-17-2025