Computing-In-Memory Dataflow for Minimal Buffer Traffic

Song, Choongseok, Jeong, Doo Seok

arXiv.org Artificial Intelligence 

--Computing-In-Memory (CIM) offers a potential solution to the memory-wall problem and can achieve high energy efficiency by minimizing data movement, making it a promising architecture for edge AI devices. Lightweight models such as MobileNet and EfficientNet, which rely on depthwise convolution for feature extraction, have been developed for these devices. However, CIM macros often struggle to accelerate depthwise convolution, suffering from underutilization of CIM memory and heavy buffer traffic. The latter, in particular, has been overlooked despite its significant impact on latency and energy consumption. To address this, we introduce a novel CIM dataflow that significantly reduces buffer traffic by maximizing data reuse and improving memory utilization during depthwise convolution. The proposed dataflow is grounded in solid theoretical principles, fully demonstrated in this paper. Applied to MobileNet and EfficientNet models, it reduces buffer traffic by 77.4-87.0%.

Convolutional neural networks (CNNs) have achieved remarkable success in computer vision, excelling in spatial feature extraction [1].
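To make the operation at the heart of the abstract concrete, the following is a minimal NumPy sketch of depthwise convolution (not the paper's dataflow): unlike standard convolution, each input channel is convolved with its own single 2D kernel, with no accumulation across channels. The function name and the valid-padding, unit-stride setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def depthwise_conv2d(x, kernels, stride=1):
    """Naive depthwise convolution (illustrative, not the paper's dataflow).

    x:       input feature map, shape (C, H, W)
    kernels: one 2D kernel per channel, shape (C, kH, kW)
    Valid padding; each output channel depends on exactly one input channel.
    """
    C, H, W = x.shape
    _, kH, kW = kernels.shape
    oH = (H - kH) // stride + 1
    oW = (W - kW) // stride + 1
    out = np.zeros((C, oH, oW))
    for c in range(C):  # no cross-channel sum: this is what distinguishes it
        for i in range(oH):
            for j in range(oW):
                patch = x[c, i * stride:i * stride + kH,
                             j * stride:j * stride + kW]
                out[c, i, j] = np.sum(patch * kernels[c])
    return out
```

Because each kernel is reused across all spatial positions of only one channel, the weight matrix a CIM macro holds is far sparser in reuse opportunities than for standard convolution, which is why memory utilization and buffer traffic become the bottlenecks the abstract describes.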