

Federated Unlearning Made Practical: Seamless Integration via Negated Pseudo-Gradients

Mora, Alessio, Mazzocca, Carlo, Montanari, Rebecca, Bellavista, Paolo

arXiv.org Artificial Intelligence

Abstract--The right to be forgotten is a fundamental principle of privacy-preserving regulations and extends to Machine Learning (ML) paradigms such as Federated Learning (FL). While FL enhances privacy by enabling collaborative model training without sharing private data, trained models still retain the influence of training data. Recently proposed Federated Unlearning (FU) methods often rely on assumptions that are impractical for real-world FL deployments, such as storing client update histories or requiring access to a publicly available dataset. To address these constraints, this paper introduces a novel method that leverages negated Pseudo-gradients Updates for Federated Unlearning (PUF). Our approach uses only standard client model updates, which are employed during regular FL rounds, and interprets them as pseudo-gradients. When a client needs to be forgotten, we apply the negation of their pseudo-gradients, appropriately scaled, to the global model. Unlike state-of-the-art mechanisms, PUF seamlessly integrates with FL workflows, incurs no additional computational or communication overhead beyond standard FL rounds, and supports concurrent unlearning requests. We extensively evaluated the proposed method on two well-known benchmark image classification datasets (CIFAR-10 and CIFAR-100) and a real-world medical imaging dataset for segmentation (ProstateMRI), using three different neural architectures: two residual networks and a vision transformer. The experimental results across various settings demonstrate that PUF achieves state-of-the-art forgetting effectiveness and recovery time without relying on any additional assumptions.

In today's digital landscape, privacy has become a major concern, as reflected by the emergence of robust regulatory frameworks worldwide [1]. The European Union (EU) has consistently emphasized the importance of protecting personal data, exemplified by the introduction of the General Data Protection Regulation (GDPR) in 2016 [2].
Most recently, in May 2024, the EU enacted Regulation 2024/1183 [3], establishing the European Digital Identity Framework that empowers individuals with fine-grained control over their information. One of the key rights in these regulations is the right to be forgotten, which allows individuals to request the deletion of their previously shared data. Similar rights are central to other major privacy laws worldwide, such as the California Consumer Privacy Act (CCPA) [4], where the right to delete grants California residents the on-demand removal of personal data held by businesses.

Alessio Mora, Rebecca Montanari, and Paolo Bellavista are with the Department of Computer Science and Engineering, University of Bologna, Bologna, Italy (e-mail: {name.surname}@unibo.it).
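The unlearning mechanism the abstract describes is simple enough to illustrate directly. The sketch below uses a toy flat-vector model with hypothetical variable names; the paper's actual scaling rule is not given here, so `scale` is a placeholder assumption.

```python
import numpy as np

def pseudo_gradient(w_global, w_client):
    # In FL, a client's model update can be read as a pseudo-gradient:
    # the direction that moved the global model toward the client's data.
    return w_client - w_global

def unlearn(w_global, client_update, scale):
    # PUF-style unlearning (sketch): apply the negation of the client's
    # pseudo-gradient, appropriately scaled, to the global model.
    return w_global - scale * client_update

w_global = np.array([0.5, -0.2, 1.0])  # toy global model parameters
w_client = np.array([0.7, -0.1, 0.8])  # model after the client's local training

g = pseudo_gradient(w_global, w_client)
w_forgotten = unlearn(w_global, g, scale=1.0)
```

Because only the standard client updates are used, this step drops into an ordinary FL aggregation loop without extra communication.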


Emerging Paradigms for Securing Federated Learning Systems

Abouelmagd, Amr Akmal, Hilal, Amr

arXiv.org Artificial Intelligence

Federated Learning (FL) facilitates collaborative model training while keeping raw data decentralized, making it a conduit for leveraging the power of IoT devices while maintaining privacy of the locally collected data. However, existing privacy-preserving techniques present notable hurdles. Methods such as Multi-Party Computation (MPC), Homomorphic Encryption (HE), and Differential Privacy (DP) often incur high computational costs and suffer from limited scalability. This survey examines emerging approaches that hold promise for enhancing both privacy and efficiency in FL, including Trusted Execution Environments (TEEs), Physical Unclonable Functions (PUFs), Quantum Computing (QC), Chaos-Based Encryption (CBE), Neuromorphic Computing (NC), and Swarm Intelligence (SI). For each paradigm, we assess its relevance to the FL pipeline, outlining its strengths, limitations, and practical considerations. We conclude by highlighting open challenges and prospective research avenues, offering a detailed roadmap for advancing secure and scalable FL systems.


Crypto-ncRNA: Non-coding RNA (ncRNA) Based Encryption Algorithm

Wang, Xu, Wang, Yiquan, Huang, Tin-yeh

arXiv.org Artificial Intelligence

Abstract--In the looming post-quantum era, traditional cryptographic systems are increasingly vulnerable to quantum computing attacks that can compromise their mathematical foundations. To address this critical challenge, we propose crypto-ncRNA--a bio-convergent cryptographic framework that leverages the dynamic folding properties of non-coding RNA (ncRNA) to generate high-entropy, quantum-resistant keys and produce unpredictable ciphertexts. The framework employs a novel, multi-stage process: encoding plaintext into RNA sequences, predicting and manipulating RNA secondary structures using advanced algorithms, and deriving cryptographic keys through the intrinsic physical unclonability of RNA molecules. Experimental evaluations indicate that, although crypto-ncRNA's encryption speed is marginally lower than that of AES, it significantly outperforms RSA in terms of efficiency and scalability while achieving a 100% pass rate on the NIST SP 800-22 randomness tests. These results demonstrate that crypto-ncRNA offers a promising and robust approach for securing digital infrastructures against the evolving threats posed by quantum computing. Moreover, with the rapid advancement of artificial intelligence, RNA-based research has gradually unfolded into a new realm of innovation (Townshend et al. (2021)). Recent studies showed that the dynamic folding processes of RNA molecules intrinsically exhibit physical unclonable function (PUF) characteristics (Herder et al. (2014); Li et al. (2022); Luescher et al. (2024); Zhou et al. (2021)), thereby establishing a pathway for designing post-quantum cryptography (PQC) systems (Arapinis et al. (2021); Cambou et al. (2021)).
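The first stage of the pipeline, encoding plaintext into RNA sequences, can be sketched as a two-bits-per-nucleotide mapping. The concrete alphabet assignment below (A, C, G, U in that order) is an assumption for illustration, not the paper's actual encoding.

```python
BASES = "ACGU"  # assumed two-bit alphabet: A=00, C=01, G=10, U=11

def encode_to_rna(plaintext: str) -> str:
    # Map each byte to four nucleotides, most significant bits first.
    return "".join(
        BASES[(byte >> shift) & 0b11]
        for byte in plaintext.encode()
        for shift in (6, 4, 2, 0)
    )

def decode_from_rna(seq: str) -> str:
    # Inverse mapping: four nucleotides back to one byte.
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return out.decode()
```

The later stages (secondary-structure prediction and key derivation from physical unclonability) depend on wet-lab and folding models and are not reproduced here.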


Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices

Li, Gaoxiang, Zhuang, Yu

arXiv.org Artificial Intelligence

The rapid expansion of Internet of Things (IoT) devices demands robust and resource-efficient security solutions. Physically Unclonable Functions (PUFs), which generate unique cryptographic keys from inherent hardware variations, offer a promising approach. However, traditional PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) are susceptible to machine learning (ML) and reliability-based attacks. In this study, we investigate Component-Differentially Challenged XOR-PUFs (CDC-XPUFs), a less explored variant, to address these vulnerabilities. We propose an optimized CDC-XPUF design that incorporates a pre-selection strategy to enhance reliability and introduces a novel lightweight architecture to reduce hardware overhead. Rigorous testing demonstrates that our design significantly lowers resource consumption, maintains strong resistance to ML attacks, and improves reliability, effectively mitigating reliability-based attacks. These results highlight the potential of CDC-XPUFs as a secure and efficient candidate for widespread deployment in resource-constrained IoT systems.


A novel reliability attack of Physical Unclonable Functions

Li, Gaoxiang, Zhuang, Yu

arXiv.org Artificial Intelligence

Physical Unclonable Functions (PUFs) are emerging as promising security primitives for IoT devices, providing device fingerprints based on physical characteristics. Despite their strengths, PUFs are vulnerable to machine learning (ML) attacks, including conventional and reliability-based attacks. Conventional ML attacks have been effective in revealing vulnerabilities of many PUFs, and reliability-based ML attacks are more powerful tools that have detected vulnerabilities in some PUFs that resist conventional ML attacks. Since reliability-based ML attacks leverage information about PUFs' unreliability, we examined the feasibility of building a defense using reliability-enhancing techniques, and discovered that majority voting with a reasonably high number of repeats provides an effective defense against existing reliability-based ML attack methods. Because majority voting reduces but does not eliminate unreliability, we investigated whether new attack methods exist that can capture the low unreliability of highly, but not perfectly, reliable PUFs. This led to the development of a new reliability representation and a representation-enabled attack method that has experimentally cracked PUFs enhanced with high-repetition majority voting.
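The majority-voting defense discussed above is easy to simulate. The sketch below uses a simple i.i.d. bit-flip noise model (an assumption, not the paper's PUF model) to show that voting over repeated measurements reduces, but does not fully eliminate, response unreliability.

```python
import random

def noisy_puf_response(true_bit, flip_prob, rng):
    # Simulated unreliable PUF bit: flips with probability flip_prob.
    return true_bit ^ (rng.random() < flip_prob)

def majority_vote(true_bit, flip_prob, repeats, rng):
    # Majority voting over repeated measurements: the voted bit is wrong
    # only if more than half of the measurements flip.
    ones = sum(noisy_puf_response(true_bit, flip_prob, rng)
               for _ in range(repeats))
    return int(ones > repeats // 2)

rng = random.Random(0)
trials = 10_000
raw_err = sum(noisy_puf_response(1, 0.1, rng) != 1
              for _ in range(trials)) / trials
mv_err = sum(majority_vote(1, 0.1, 11, rng) != 1
             for _ in range(trials)) / trials
```

The residual error after voting is what the paper's new reliability representation is designed to capture.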


A Photonic Physically Unclonable Function's Resilience to Multiple-Valued Machine Learning Attacks

Henderson, Jessie M., Henderson, Elena R., Harper, Clayton A., Shahoei, Hiva, Oxford, William V., Larson, Eric C., MacFarlane, Duncan L., Thornton, Mitchell A.

arXiv.org Artificial Intelligence

Physically unclonable functions (PUFs) identify integrated circuits using nonlinearly-related challenge-response pairs (CRPs). Ideally, the relationship between challenges and corresponding responses is unpredictable, even if a subset of CRPs is known. Previous work developed a photonic PUF offering improved security compared to non-optical counterparts. Here, we investigate this PUF's susceptibility to Multiple-Valued-Logic-based machine learning attacks. We find that approximately 1,000 CRPs are necessary to train models that predict response bits better than random chance. Given the significant challenge of acquiring a vast number of CRPs from a photonic PUF, our results demonstrate photonic PUF resilience against such attacks.


Machine Learning Resistant Amorphous Silicon Physically Unclonable Functions (PUFs)

Kilic, Velat, Macfarlane, Neil, Stround, Jasper, Metais, Samuel, Alemohammad, Milad, Cooper, A. Brinton, Foster, Amy C., Foster, Mark A.

arXiv.org Artificial Intelligence

Many crypto protocols rely heavily on the security of keys that are stored in device memory and are susceptible to malware attacks. Physically unclonable functions (PUFs) have been proposed as an alternative [1]: their response to an external stimulus (challenge) is determined by their microscopic structure, which is difficult to clone. PUF operation consists of two phases: enrollment and deployment. During enrollment, the manufacturer creates a challenge-response pair (CRP) library by probing the device with unique binary challenges and measuring/generating the corresponding digitized responses. The CRP data set is then stored for the deployment phase, where the PUF device can be authenticated by probing it with a subset of challenges from the CRP data set and comparing the responses. To be a strong security primitive, a PUF must exhibit behavior that is i) deterministic, ii) unpredictable, and iii) unique.
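The two-phase enrollment/deployment protocol described above can be sketched as follows. A keyed hash stands in for the physical device here, which is a hypothetical model: a real PUF's response comes from its microscopic structure, not a stored secret.

```python
import hashlib
import secrets

def puf_response(device, challenge):
    # Stand-in for a physical PUF: a keyed hash plays the role of the
    # device's unclonable microscopic structure (illustrative only).
    return hashlib.sha256(device + challenge).digest()

def enroll(device, n_crps):
    # Enrollment: the manufacturer probes the device with random
    # challenges and stores the challenge-response pairs (CRPs).
    return {(c := secrets.token_bytes(8)): puf_response(device, c)
            for _ in range(n_crps)}

def authenticate(device, crp_library, n_checks=4):
    # Deployment: replay a subset of enrolled challenges and compare
    # the device's responses against the stored ones.
    checks = list(crp_library.items())[:n_checks]
    return all(puf_response(device, c) == r for c, r in checks)

crp_library = enroll(b"device-0", 16)  # b"device-0" models one device's structure
```

A clone with a different "structure" fails authentication because it cannot reproduce the enrolled responses.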


Polynomial Bounds for Learning Noisy Optical Physical Unclonable Functions and Connections to Learning With Errors

Albright, Apollo, Gelfand, Boris, Dixon, Michael

arXiv.org Artificial Intelligence

It is shown that a class of optical physical unclonable functions (PUFs) can be learned to arbitrary precision with arbitrarily high probability, even in the presence of noise, given access to polynomially many challenge-response pairs and polynomially bounded computational power, under mild assumptions about the distributions of the noise and challenge vectors. This extends the results of Rührmair et al. (2013), who showed a subset of this class of PUFs to be learnable in polynomial time in the absence of noise, under the assumption that the optics of the PUF were either linear or had negligible nonlinear effects. We derive polynomial bounds for the required number of samples and the computational complexity of a linear regression algorithm, based on size parameters of the PUF, the distributions of the challenge and noise vectors, and the probability and accuracy of the regression algorithm, with an analysis similar to one done by Bootle et al. (2018), who demonstrated a learning attack on a poorly implemented version of the Learning With Errors problem.
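The learning setting analyzed above can be illustrated with a toy linear model: if the optics are approximately linear, each response is an inner product of the challenge vector with unknown device parameters plus noise, so ordinary least squares recovers the parameters from polynomially many CRPs. The sizes and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 2000                      # PUF size, number of observed CRPs
w_true = rng.normal(size=n)          # unknown device parameters
X = rng.normal(size=(m, n))          # challenge vectors
y = X @ w_true + 0.05 * rng.normal(size=m)   # noisy responses

# Linear regression attack: least-squares fit of the device parameters.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
err = np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true)
```

With m well above n, the relative estimation error shrinks with the noise level, matching the intuition behind the paper's sample-complexity bounds.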


Active learning for fast and slow modeling attacks on Arbiter PUFs

Dumoulin, Vincent, Rao, Wenjing, Devroye, Natasha

arXiv.org Artificial Intelligence

Modeling attacks, in which an adversary uses machine learning techniques to model a hardware-based Physically Unclonable Function (PUF) pose a great threat to the viability of these hardware security primitives. In most modeling attacks, a random subset of challenge-response-pairs (CRPs) are used as the labeled data for the machine learning algorithm. Here, for the arbiter-PUF, a delay based PUF which may be viewed as a linear threshold function with random weights (due to manufacturing imperfections), we investigate the role of active learning in Support Vector Machine (SVM) learning. We focus on challenge selection to help SVM algorithm learn ``fast'' and learn ``slow''. Our methods construct challenges rather than relying on a sample pool of challenges as in prior work. Using active learning to learn ``fast'' (less CRPs revealed, higher accuracies) may help manufacturers learn the manufactured PUFs more efficiently, or may form a more powerful attack when the attacker may query the PUF for CRPs at will. Using active learning to select challenges from which learning is ``slow'' (low accuracy despite a large number of revealed CRPs) may provide a basis for slowing down attackers who are limited to overhearing CRPs.
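The model underlying this view (an arbiter PUF as a linear threshold function with random weights) can be sketched with the standard additive delay model and the parity ("phi") feature transform; the random weights below stand in for manufacturing imperfections.

```python
import numpy as np

def apuf_response(weights, challenge):
    # Additive delay model: an n-stage arbiter PUF is a linear threshold
    # function over the parity transform phi_i = prod_{j>=i} (1 - 2*c_j).
    phi = np.cumprod((1 - 2 * challenge)[::-1])[::-1]
    phi = np.append(phi, 1.0)        # constant (bias) term
    return int(np.dot(weights, phi) > 0)

rng = np.random.default_rng(0)
n = 16
weights = rng.normal(size=n + 1)     # random manufacturing-variation delays
challenge = rng.integers(0, 2, size=n)
bit = apuf_response(weights, challenge)
```

Because the decision boundary is linear in phi space, an SVM can model the PUF from labeled CRPs, and active construction of challenges near or far from that boundary is what makes learning "fast" or "slow".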


A Provably Secure Strong PUF based on LWE: Construction and Implementation

Xi, Xiaodan, Li, Ge, Wang, Ye, Jeon, Yeonsoo, Orshansky, Michael

arXiv.org Artificial Intelligence

We construct a strong PUF with provable security against ML attacks on both classical and quantum computers. The security is guaranteed by the cryptographic hardness of learning decryption functions of public-key cryptosystems, and by the hardness of the learning-with-errors (LWE) problem defined on integer lattices. We call our construction the lattice PUF. We construct the lattice PUF with a physically obfuscated key and an LWE decryption function block. To allow deployments in different scenarios, we demonstrate designs with different latency-area trade-offs. A compact design uses a highly serialized LFSR and LWE decryption function, while a latency-optimized design uses an unrolled LFSR and a parallel datapath. In addition to the theoretical security guarantee, we evaluate empirical resistance to the various leading ML techniques: the prediction error remains above 49.76% after 1 million training CRPs. The resource-efficient design requires only 45 slices for the PUF logic proper, and 351 slices for a fuzzy extractor. The latency-optimized design achieves a 148X reduction in latency, at a 10X increase in PUF hardware utilization. The mean uniformity of PUF responses is 49.98%, the mean uniqueness is 50.00%, and the mean reliability is 1.26%.

Silicon physical unclonable functions (PUFs) are security primitives commonly adopted for device identification, authentication, and cryptographic key generation [38]. A PUF exploits the inherent randomness of CMOS technology to generate an output response for a given input challenge. Weak PUFs, which are also called physically obfuscated keys (POKs) [15], have a limited challenge-response pair (CRP) space. The security of a strong PUF requires the associated CRPs to be unpredictable: given a certain set of known CRPs, it should be hard to predict the unobserved CRPs, even with the most powerful machine learning (ML) based modeling attacks. Engineering an ML-resistant strong PUF with low hardware cost has been challenging.
Variants of the original arbiter PUF (APUF) with stronger modeling attack resilience, including bistable ring PUF and feedforward APUF, have also been broken via ML attacks [37], [35]. The recent interpose PUF (IPUF) [33] proposal is claimed to have provable ML resistance.
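The decryption-based construction can be illustrated with a toy LWE example. The parameters below are far too small for real security, and the key is stored in software, so this is only a sketch of the decryption function a lattice PUF would implement with a physically obfuscated key.

```python
import numpy as np

def lwe_decrypt(secret, ciphertext, q):
    # LWE decryption: a ciphertext is (a, b) with b = <a, s> + e + bit*(q//2).
    # The bit is 1 iff b - <a, s> (mod q) is closer to q//2 than to 0.
    a, b = ciphertext
    d = (b - int(np.dot(a, secret))) % q
    return int(min(d, q - d) > q // 4)

# Toy parameters (illustrative only; nowhere near real security levels).
q, n = 257, 8
rng = np.random.default_rng(1)
s = rng.integers(0, q, size=n)       # the (physically obfuscated) key

def encrypt(bit, secret):
    # Companion encryption, for demonstrating the decryption block.
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-2, 3))     # small noise term
    b = (int(np.dot(a, secret)) + e + bit * (q // 2)) % q
    return a, b
```

In the hardware design, the challenge selects ciphertexts (expanded via an LFSR) and the response is the decrypted bit; predicting responses without the key would amount to learning the LWE decryption function.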