Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
Lazzaro, Dario, Cinà, Antonio Emanuele, Pintor, Maura, Demontis, Ambra, Biggio, Battista, Roli, Fabio, Pelillo, Marcello
–arXiv.org Artificial Intelligence
Deep learning models have grown significantly in parameter count, which in turn increases the number of operations executed at inference time. This growth contributes substantially to higher energy consumption and prediction latency. In this work, we propose EAT, a gradient-based algorithm that aims to reduce energy consumption during model training. To this end, we leverage a differentiable approximation of the $\ell_0$ norm and use it as a sparsity penalty added to the training loss. Through our experimental analysis on three datasets and two deep neural networks, we demonstrate that our energy-aware training algorithm EAT trains networks with a better trade-off between classification performance and energy efficiency.
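The core idea described above can be sketched as follows. This is a hypothetical illustration, not the authors' EAT implementation: the function names, the particular smooth $\ell_0$ surrogate ($\sum_i (1 - e^{-\beta w_i^2})$), and all hyperparameter values are assumptions chosen for clarity. It shows how a differentiable approximation of the $\ell_0$ norm can be added to the task loss so that gradient descent drives weights toward exact zeros.

```python
import numpy as np

def l0_surrogate(w, beta=10.0):
    """Smooth approximation of ||w||_0 (an assumed surrogate, not the
    paper's exact one): sum_i (1 - exp(-beta * w_i^2)).
    Each term is ~0 when w_i = 0 and approaches 1 as |w_i| grows."""
    return np.sum(1.0 - np.exp(-beta * w**2))

def l0_surrogate_grad(w, beta=10.0):
    """Gradient of the surrogate with respect to w."""
    return 2.0 * beta * w * np.exp(-beta * w**2)

def energy_aware_step(w, task_grad, lam=0.1, lr=0.01, beta=10.0):
    """One gradient-descent step on task_loss + lam * l0_surrogate(w).
    The penalty term pushes small weights toward zero, reducing the
    number of nonzero operations at inference."""
    return w - lr * (task_grad + lam * l0_surrogate_grad(w, beta))
```

A usage note: in practice `task_grad` would come from backpropagation through the network's classification loss, and `lam` controls the trade-off between accuracy and sparsity (hence energy) that the abstract refers to.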
Jul-1-2023