Supplementary Material
StreamNet: Memory-Efficient Streaming Tiny Deep Learning Inference on the Microcontroller
Neural Information Processing Systems
Figure 1: The system architecture of StreamNet. TensorFlow Lite for Microcontrollers (TFLM) (1) is tailored for TinyML applications and adopts an interpreter-based approach to enable cross-platform interoperability on embedded systems. However, TFLM's interpreter adds performance overhead to TinyML applications on MCUs. Unlike TFLM, StreamNet and MCUNetv2 replace the interpreter with a code generator. StreamNet is built on top of MCUNetv2 (2; 3) and adds 1D and 2D stream processing (4; 5; 6; 7; 8; 9; 10). StreamNet's code generator produces kernel implementations with fixed parameters at compile time.
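To illustrate the code-generation approach described above, the following minimal C sketch shows what a kernel with parameters fixed at compile time might look like, in contrast to an interpreter that reads tensor shapes from a model file at run time. The function name, dimensions, and simplified requantization are illustrative assumptions and do not reproduce StreamNet's actual generated output.

/* Hypothetical sketch of a generated kernel with shapes baked in at
 * compile time (not StreamNet's actual output). Because every loop
 * bound is a compile-time constant, the compiler can unroll and
 * optimize the loops without any interpreter dispatch overhead. */
#include <stdint.h>

#define IN_CH   16            /* input channels (fixed by the generator) */
#define OUT_CH  16            /* output channels */
#define IN_LEN  32            /* 1D input length */
#define K_LEN   3             /* 1D kernel length */
#define OUT_LEN (IN_LEN - K_LEN + 1)

/* 1D convolution over int8 data with all parameters fixed. */
void conv1d_16x16_k3(const int8_t in[IN_CH][IN_LEN],
                     const int8_t w[OUT_CH][IN_CH][K_LEN],
                     const int32_t bias[OUT_CH],
                     int8_t out[OUT_CH][OUT_LEN])
{
    for (int oc = 0; oc < OUT_CH; ++oc) {
        for (int t = 0; t < OUT_LEN; ++t) {
            int32_t acc = bias[oc];
            for (int ic = 0; ic < IN_CH; ++ic) {
                for (int k = 0; k < K_LEN; ++k) {
                    acc += (int32_t)in[ic][t + k] * (int32_t)w[oc][ic][k];
                }
            }
            /* Simplified requantization; real generated code would apply
             * per-channel scales and zero points. */
            acc >>= 8;
            if (acc > 127)  acc = 127;
            if (acc < -128) acc = -128;
            out[oc][t] = (int8_t)acc;
        }
    }
}

Because the dimensions are constants rather than values looked up by an interpreter, the generated kernel carries no per-operator metadata and incurs no run-time dispatch, which is the advantage the caption attributes to the code-generator design.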