Accelerate Intermittent Deep Inference
arXiv.org Artificial Intelligence
…have to execute intermittently. Communicating with remote servers requires significantly more energy than local computation or sensing, which has led to the development of on-device intelligence and the execution of deep neural network (DNN) inference on intermittent systems. Neural architecture search (NAS) techniques have been developed to automatically find highly accurate neural networks that can execute efficiently on deployed systems. With the increasing demand for deployment on battery-less edge devices, intermittent-aware neural architecture search is becoming crucial. DNN inference under intermittent power requires accumulative execution across power cycles, as ambient power is typically unstable and too weak for continuous execution. More recently, contemporary trends focus on making DNN models runnable on battery-less intermittent devices. One approach is to shrink the DNN models by enabling weight sharing and pruning, and by conducting NAS with an optimized search space targeting specific edge devices [2] [8] [7] [9]. Another approach analyzes intermittent execution and designs the corresponding system by performing NAS that is aware of intermittent execution cycles and resource constraints. However, such optimized NAS considered only consecutive execution with no power loss, and intermittent…
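The "accumulative execution across power cycles" requirement can be made concrete with a minimal sketch (not code from the paper; all names are hypothetical): inference proceeds layer by layer, progress is committed to a checkpoint after each layer (standing in for non-volatile memory), and execution resumes from the checkpoint after each power failure instead of restarting from scratch.

```python
def run_inference_intermittently(layers, has_power, checkpoint):
    """Resume from the last checkpointed layer; run until power is exhausted.

    Returns (done, checkpoint). `checkpoint` stands in for state kept in
    non-volatile memory, so progress survives a simulated power failure.
    """
    i = checkpoint["layer"]
    x = checkpoint["activation"]
    while i < len(layers):
        if not has_power():            # harvested energy ran out mid-inference
            return False, checkpoint   # partial progress is preserved
        x = layers[i](x)
        checkpoint["layer"] = i + 1    # commit progress after each layer
        checkpoint["activation"] = x
        i += 1
    return True, checkpoint


def simulate(layers, x0, energy_per_cycle=1):
    """Drive repeated power cycles until one inference completes."""
    checkpoint = {"layer": 0, "activation": x0}
    cycles, done = 0, False
    while not done:
        energy = [energy_per_cycle]    # fresh energy burst each power cycle
        def has_power():
            if energy[0] == 0:
                return False
            energy[0] -= 1             # each layer costs one unit of energy
            return True
        done, checkpoint = run_inference_intermittently(layers, has_power, checkpoint)
        cycles += 1
    return checkpoint["activation"], cycles
```

With three single-unit-cost layers and one unit of energy per cycle, the inference accumulates over three power cycles; with a large enough energy budget it finishes in one, which is the continuous-execution case the earlier NAS work assumed.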
Jul-1-2024
- Country:
- North America > United States
- California > Riverside County
- Riverside (0.04)
- Texas (0.05)
- Genre:
- Research Report (0.40)
- Industry:
- Energy > Power Industry (0.49)