Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to Adversarial Attacks
Zakariyya, Idris, Ayaz, Ferheen, Kharbouche-Harrari, Mounia, Singer, Jeremy, Keoh, Sye Loong, Pau, Danilo, Cano, José
Reducing the memory footprint of Machine Learning (ML) models, especially Deep Neural Networks (DNNs), is imperative to facilitate their deployment on resource-constrained edge devices. However, a notable drawback of DNN models is their susceptibility to adversarial attacks, wherein minor input perturbations can deceive them. A primary challenge is therefore the development of accurate, resilient, and compact DNN models suitable for deployment on resource-constrained edge devices. This paper presents a compact DNN model that is resilient to both black-box and white-box adversarial attacks, a resilience achieved through training with the QKeras quantization-aware training framework. The study explores the potential of QKeras and an adversarial robustness technique, Jacobian Regularization (JR), to co-optimize the DNN architecture through a per-layer JR methodology. On this basis, a DNN model employing this co-optimization strategy based on Stochastic Ternary Quantization (STQ) was devised, and its performance was compared against existing DNN models under various white-box and black-box attacks. The experimental findings reveal that the proposed DNN model has a small footprint and, on average, outperforms the Quanos and DS-CNN MLCommons/TinyML (MLC/T) benchmarks when challenged with white-box and black-box attacks, respectively, on the CIFAR-10 image and Google Speech Commands audio datasets.
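For readers unfamiliar with the toolchain, the sketch below shows how a QKeras model with Stochastic Ternary Quantization can be declared; the topology, bit-widths, and hyperparameters are illustrative assumptions, not the architecture devised in the paper.

```python
# Minimal sketch, NOT the paper's architecture: a QKeras model whose weights
# are constrained by the stochastic_ternary quantizer, so quantization error
# is seen during training (quantization-aware training).
import tensorflow as tf
from qkeras import QConv2D, QDense, QActivation, quantized_relu, stochastic_ternary

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),         # CIFAR-10-shaped input
    QConv2D(16, (3, 3),
            kernel_quantizer=stochastic_ternary(),     # weights in {-1, 0, +1}
            bias_quantizer=stochastic_ternary()),
    QActivation(quantized_relu(4)),                     # 4-bit activations (assumed)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    QDense(10, kernel_quantizer=stochastic_ternary(),
           bias_quantizer=stochastic_ternary()),
    tf.keras.layers.Activation("softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Because the quantizers act inside the forward pass, the learning loop sees the ternarized weights directly, which is what lets quantization loss be accounted for during training rather than in a post-hoc conversion step.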
Enhancing Field-Oriented Control of Electric Drives with Tiny Neural Network Optimized for Micro-controllers
Elele, Martin Joel Mouk, Pau, Danilo, Zhuang, Shixin, Facchinetti, Tullio
The deployment of neural networks on resource-constrained microcontrollers has gained momentum, driving many advancements in Tiny Neural Networks. This paper introduces a tiny feed-forward neural network, TinyFC, integrated into the Field-Oriented Control (FOC) of Permanent Magnet Synchronous Motors (PMSMs). Proportional-Integral (PI) controllers are widely used in FOC for their simplicity, although their limitations in handling nonlinear dynamics hinder precision. To address this issue, a lightweight 1,400-parameter TinyFC was devised to enhance the FOC performance while fitting into the computational and memory constraints of a microcontroller. Advanced optimization techniques, including pruning, hyperparameter tuning, and quantization to 8-bit integers, were applied to reduce the model's footprint while preserving the network's effectiveness. Simulation results show the proposed approach significantly reduced overshoot by up to 87.5%, with the pruned model achieving complete overshoot elimination, highlighting the potential of tiny neural networks in real-time motor control.
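As a rough sketch of the footprint-reduction pipeline named in the abstract, the snippet below builds a comparably sized feed-forward network and converts it to 8-bit integers with TensorFlow Lite post-training quantization; the layer widths, input/output signals, and representative data are assumptions, and TinyFC's actual topology and training may differ.

```python
# Minimal sketch (assumed topology): a tiny feed-forward controller network
# of roughly the parameter budget quoted for TinyFC, then int8 conversion.
import numpy as np
import tensorflow as tf

tiny_fc = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),        # e.g. d/q current and speed errors (assumed)
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),                  # e.g. corrections to d/q voltage commands (assumed)
])
tiny_fc.compile(optimizer="adam", loss="mse")  # ~1.3k parameters, near the quoted budget

def representative_data():
    # Placeholder for logged FOC signals; random data stands in here.
    for _ in range(100):
        yield [np.random.uniform(-1.0, 1.0, (1, 4)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(tiny_fc)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()             # int8 flatbuffer for the microcontroller
```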
Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
Ayaz, Ferheen, Zakariyya, Idris, Cano, José, Keoh, Sye Loong, Singer, Jeremy, Pau, Danilo, Kharbouche-Harrari, Mounia
Reducing the memory footprint of Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), is essential to enable their deployment on resource-constrained tiny devices. However, a disadvantage of DNN models is their vulnerability to adversarial attacks: they can be fooled by adding slight perturbations to the inputs. The challenge is therefore to create accurate, robust, and tiny DNN models deployable on resource-constrained embedded devices. This paper reports the results of devising a tiny DNN model, robust to adversarial black-box and white-box attacks, trained with an automatic quantization-aware training framework, i.e. QKeras, with the deep quantization loss accounted for in the learning loop, thereby making the designed DNNs more accurate for deployment on tiny devices. We investigated how QKeras and an adversarial robustness technique, Jacobian Regularization (JR), can provide a co-optimization strategy by exploiting the DNN topology and a per-layer JR approach to produce robust yet tiny deeply quantized DNN models. As a result, a new DNN model implementing this co-optimization strategy was conceived, developed, and tested on three datasets containing both image and audio inputs, and its performance was compared with existing benchmarks against various white-box and black-box attacks. Experimental results demonstrate that, on average, our proposed DNN model achieved 8.3% and 79.5% higher accuracy than the MLCommons/Tiny benchmarks in the presence of white-box and black-box attacks on the CIFAR-10 image dataset and a subset of the Google Speech Commands audio dataset, respectively. It was also 6.5% more accurate under black-box attacks on the SVHN image dataset.
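To make the JR side of the co-optimization concrete, the sketch below folds a standard Jacobian Regularization penalty into a training step, approximating the Frobenius norm of the input-output Jacobian with a single random projection; the penalty weight and single-projection choice are illustrative assumptions, not the paper's per-layer configuration.

```python
# Minimal sketch of Jacobian Regularization (JR) in a custom training step.
# The squared norm of a random projection v^T J is, in expectation,
# proportional to ||J||_F^2, so one extra backward pass suffices.
import tensorflow as tf

lambda_jr = 0.01  # penalty weight (assumed value, tuned in practice)

def train_step(model, optimizer, loss_fn, x, y):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(x)                          # x: float input batch tensor
            logits = model(x, training=True)
            v = tf.random.normal(tf.shape(logits))  # random unit projection vector
            v = v / tf.norm(v, axis=-1, keepdims=True)
            proj = tf.reduce_sum(v * logits)
        vjp = inner.gradient(proj, x)               # v^T J via one backward pass
        flat = tf.reshape(vjp, (tf.shape(vjp)[0], -1))
        jr_penalty = tf.reduce_mean(tf.reduce_sum(tf.square(flat), axis=1))
        loss = loss_fn(y, logits) + lambda_jr * jr_penalty
    grads = outer.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

In practice the penalty weight trades clean accuracy against robustness, so it is typically tuned per dataset and model.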