
Collaborating Authors

 Wang, Run


Offload Rethinking by Cloud Assistance for Efficient Environmental Sound Recognition on LPWANs

arXiv.org Artificial Intelligence

Learning-based environmental sound recognition has emerged as a crucial method for ultra-low-power environmental monitoring in biological research and city-scale sensing systems. These systems usually operate under limited resources and are often powered by harvested energy in remote areas. Recent efforts in on-device sound recognition suffer from low accuracy due to resource constraints, whereas cloud offloading strategies are hindered by high communication costs. In this work, we introduce ORCA, a novel resource-efficient cloud-assisted environmental sound recognition system on batteryless devices operating over Low-Power Wide-Area Networks (LPWANs), targeting wide-area audio sensing applications. We propose a cloud assistance strategy that remedies the low accuracy of on-device inference while minimizing the communication costs for cloud offloading. By leveraging a self-attention-based cloud sub-spectral feature selection method to facilitate efficient on-device inference, ORCA resolves three key challenges for resource-constrained cloud offloading over LPWANs: 1) high communication costs and low data rates, 2) dynamic wireless channel conditions, and 3) unreliable offloading. We implement ORCA on an energy-harvesting batteryless microcontroller and evaluate it in a real-world urban sound testbed. Our results show that ORCA outperforms state-of-the-art methods by up to 80× in energy savings and 220× in latency reduction while maintaining comparable accuracy.
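The core idea of sub-spectral feature selection — score frequency sub-bands and offload only the most informative ones to cut LPWAN payload size — can be illustrated with a toy sketch. Everything below (the energy-based scoring, function names, and shapes) is an assumption for illustration; the paper's actual method uses a trained self-attention model on the cloud side.

```python
import numpy as np

def select_subbands(spectrogram, num_keep):
    """Score each frequency sub-band and keep only the top-scoring ones.

    Toy stand-in for attention-based sub-spectral selection: here the
    "attention" is just a softmax over per-band energy.
    """
    # spectrogram: (num_subbands, num_frames)
    energy = spectrogram.sum(axis=1)
    scores = np.exp(energy - energy.max())   # softmax-style weighting
    scores /= scores.sum()
    keep = np.sort(np.argsort(scores)[::-1][:num_keep])  # best bands, in order
    return keep, spectrogram[keep]

# Keeping 2 of 8 sub-bands shrinks the offloaded payload by 4x.
spec = np.abs(np.random.default_rng(0).normal(size=(8, 16)))
bands, reduced = select_subbands(spec, num_keep=2)
print(bands, reduced.shape)   # e.g. two band indices, shape (2, 16)
```

The payoff is the ratio between the full spectrogram and the transmitted slice: only `num_keep / num_subbands` of the features cross the low-rate link.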


SSL-Auth: An Authentication Framework by Fragile Watermarking for Pre-trained Encoders in Self-supervised Learning

arXiv.org Artificial Intelligence

Self-supervised learning (SSL), a paradigm harnessing unlabeled datasets to train robust encoders, has recently witnessed substantial success. These encoders serve as pivotal feature extractors for downstream tasks, demanding significant computational resources. Nevertheless, recent studies have shed light on vulnerabilities in pre-trained encoders, including backdoor and adversarial threats. Safeguarding the intellectual property of encoder trainers and ensuring the trustworthiness of deployed encoders pose notable challenges in SSL. To bridge these gaps, we introduce SSL-Auth, the first authentication framework designed explicitly for pre-trained encoders. SSL-Auth leverages selected key samples and employs a well-trained generative network to reconstruct watermark information, thus affirming the integrity of the encoder without compromising its performance. By comparing the reconstruction outcomes of the key samples, we can identify any malicious alterations. Comprehensive evaluations conducted on a range of encoders and diverse downstream tasks demonstrate the effectiveness of our proposed SSL-Auth.


Hard Adversarial Example Mining for Improving Robust Fairness

arXiv.org Artificial Intelligence

Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems, restricting their applicability. In this paper, we empirically observe that this limitation may be attributed to serious adversarial confidence overfitting, i.e., certain adversarial examples with overconfidence. To alleviate this problem, we propose HAM, a straightforward yet effective framework via adaptive Hard Adversarial example Mining. HAM concentrates on mining hard adversarial examples while discarding the easy ones in an adaptive fashion.

Various approaches have been proposed to enhance the defense capabilities of DNNs against AEs, and adversarial training (AT) has been demonstrated to be one of the most effective strategies [11]. Nevertheless, recent research [26, 23] has observed that adversarially trained models usually suffer from a serious unfairness problem, i.e., there is a noticeable disparity in accuracy between different classes, seriously restricting their applicability in real-world scenarios. Although some solutions have been proposed, the average robustness fairness score is still low and needs to be urgently addressed. On the other hand, several recent studies [29, 17, 25] have focused on achieving efficient adversarial …
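The mining step — keep hard adversarial examples, discard easy (overconfident) ones — can be sketched as a loss-based filter. The fixed `keep_ratio` and the ranking criterion below are illustrative assumptions; in HAM the discard decision is adaptive during training.

```python
import numpy as np

def mine_hard_examples(losses, keep_ratio):
    """Return indices of the hardest (highest-loss) adversarial examples.

    Easy AEs, on which the model is already overconfident, contribute
    near-zero loss and are dropped from the training batch.
    """
    losses = np.asarray(losses)
    k = max(1, int(len(losses) * keep_ratio))
    hard_idx = np.argsort(losses)[::-1][:k]   # largest losses first
    return np.sort(hard_idx)

# Per-example adversarial losses for a batch of 6 AEs.
batch_losses = [0.01, 2.3, 0.05, 1.7, 0.02, 0.9]
print(mine_hard_examples(batch_losses, keep_ratio=0.5))  # -> [1 3 5]
```

Training then proceeds on the surviving subset, concentrating the robust-loss gradient on the examples the model has not yet fit with high confidence.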


A Novel Verifiable Fingerprinting Scheme for Generative Adversarial Networks

arXiv.org Artificial Intelligence

This paper presents a novel fingerprinting scheme for the Intellectual Property (IP) protection of Generative Adversarial Networks (GANs). Prior solutions for classification models adopt adversarial examples as the fingerprints, which can raise stealthiness and robustness problems when they are applied to GAN models. Our scheme constructs a composite deep learning model from the target GAN and a classifier. Then we generate stealthy fingerprint samples from this composite model, and register them to the classifier for effective ownership verification. This scheme inspires three concrete methodologies to practically protect modern GAN models. Theoretical analysis proves that these methods can satisfy different security requirements necessary for IP protection. We also conduct extensive experiments to show that our solutions outperform existing strategies in terms of stealthiness, functionality preservation, and unremovability.
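The composite-model verification flow — feed registered fingerprint inputs through the suspect generator, classify the outputs, and claim ownership if the predictions match the registered labels — can be sketched as follows. The function names, toy linear models, and the 0.9 matching threshold are all assumptions for illustration, not the paper's concrete methodologies.

```python
import numpy as np

def verify_ownership(generator, classifier, fingerprints, expected_labels):
    """Ownership holds if the classifier recovers the registered label
    for (almost) every fingerprint sample produced by the generator."""
    hits = sum(
        int(np.argmax(classifier(generator(z))) == y)
        for z, y in zip(fingerprints, expected_labels)
    )
    return hits / len(fingerprints) >= 0.9   # assumed verification threshold

rng = np.random.default_rng(2)
G = rng.normal(size=(6, 4))                  # toy "generator"
C = rng.normal(size=(3, 6))                  # toy "classifier" head
generator = lambda z: np.tanh(G @ z)
classifier = lambda x: C @ x

# Registration: record the composite model's labels on fingerprint codes.
zs = [rng.normal(size=4) for _ in range(5)]
labels = [int(np.argmax(classifier(generator(z)))) for z in zs]

print(verify_ownership(generator, classifier, zs, labels))  # -> True
```

Because the fingerprints are ordinary-looking generated samples rather than adversarial examples, the check stays stealthy: a model thief cannot easily identify which inputs carry the fingerprint.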