Dynamic Black-box Backdoor Attacks on IoT Sensory Data

Ajesh Koyatan Chathoth, Stephen Lee

arXiv.org Artificial Intelligence 

Abstract--Sensor data-based recognition systems are widely used in various applications, such as gait-based authentication and human activity recognition (HAR). Modern wearable and smart devices feature various built-in Inertial Measurement Unit (IMU) sensors, and such sensor measurements can be fed to a machine learning model to train and classify human activities. While deep learning-based models have proven successful in classifying human activity and gestures, they pose various security risks. In this paper, we present a novel dynamic trigger-generation technique for performing black-box backdoor attacks on sensor data-based IoT systems. Our empirical analysis shows that the attack succeeds across various datasets and classifier models with minimal perturbation of the input data. We also provide a detailed comparative analysis of performance and stealthiness against other poisoning techniques used in backdoor attacks, and we discuss several adversarial defense mechanisms and their impact on the effectiveness of our trigger-generation technique.

Smart devices, equipped with advanced sensors and connectivity, are enabling new and emerging applications in mobile sensing. From tracking physical activity to monitoring health conditions via gait analysis, these devices are transforming how we interact with our environments. For instance, by leveraging sensor data, these devices can recognize users or even diagnose health conditions [1], [2]. Meanwhile, recent advances in deep learning have significantly enhanced the accuracy and utility of smart device applications, driving increased research interest in their potential uses [3], [4]. As deep learning and sensor technologies continue to evolve, we expect more widespread use of deep learning in diverse smart device applications. Despite the popularity of deep learning, there is a growing security concern regarding its application [5], [6].
Deep neural network (DNN) models are particularly vulnerable to backdoor attacks, where attackers design specific triggers that cause the model to misclassify inputs containing those triggers.
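To make the threat model concrete, the following is a minimal sketch of the classic static-trigger, label-flipping form of backdoor poisoning on windowed IMU data. This is not the dynamic trigger-generation technique proposed in the paper; it only illustrates the general mechanism that such attacks build on. All function names, shapes, and parameters here are hypothetical, and the trigger is a small fixed additive pattern chosen for illustration.

```python
import numpy as np

def apply_trigger(window, trigger, start=0):
    """Additively embed a fixed trigger pattern into a sensor window.

    window:  (timesteps, channels) array of IMU readings
    trigger: (pattern_len, channels) array, pattern_len <= timesteps
    """
    poisoned = window.copy()
    poisoned[start:start + trigger.shape[0]] += trigger
    return poisoned

def poison_dataset(X, y, target_label, rate=0.1, amplitude=0.05, seed=0):
    """Poison a fraction of (window, label) pairs with a small additive trigger.

    X: (n_samples, timesteps, channels) sensor windows
    y: (n_samples,) integer labels
    Returns poisoned copies of X and y, plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    n_poison = int(rate * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # Short, low-amplitude pattern so the perturbation stays stealthy.
    trigger = amplitude * rng.standard_normal((8, X.shape[2]))
    for i in idx:
        X_p[i] = apply_trigger(X_p[i], trigger)
        y_p[i] = target_label  # label-flip to the attacker's target class
    return X_p, y_p, idx
```

A model trained on `X_p, y_p` learns to associate the trigger pattern with `target_label`, so at inference time any input stamped with the same pattern is misclassified. A static trigger like this is easier to detect than a dynamically generated one, which motivates the dynamic approach studied in the paper.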
