ML attacks
SALSA VERDE: a machine learning attack on LWE with sparse small secrets
Learning with Errors (LWE) is a hard math problem used in post-quantum cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of the LWE problem for their security, and two LWE-based cryptosystems were recently standardized by NIST for digital signatures and key exchange (KEM). Thus, it is critical to continue assessing the security of LWE and specific parameter choices. For example, HE uses secrets with small entries, and the HE community has considered standardizing small sparse secrets to improve efficiency and functionality. However, prior work, SALSA and PICANTE, showed that ML attacks can recover sparse binary secrets. Building on these, we propose VERDE, an improved ML attack that can recover sparse binary, ternary, and narrow Gaussian secrets.
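To make the setting concrete, here is a minimal sketch (our illustration with toy parameters, not the paper's code) of how LWE samples with a sparse binary secret are generated: the attacker sees pairs $(A, b)$ with $b = A \cdot s + e \bmod q$ and must recover $s$.

```python
import numpy as np

# Illustrative sketch (not the paper's code): generate LWE samples
# b = A.s + e (mod q) with a sparse binary secret of Hamming weight h.
rng = np.random.default_rng(0)

n, q, h, m = 256, 3329, 10, 4 * 256   # toy parameters, chosen for illustration

# Sparse binary secret: exactly h entries are 1, the rest 0.
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=h, replace=False)] = 1

A = rng.integers(0, q, size=(m, n))                       # uniform matrix mod q
e = rng.normal(0, 3.2, size=m).round().astype(np.int64)   # small Gaussian error
b = (A @ s + e) % q                                       # the LWE samples (A, b)
```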
Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices
The rapid expansion of Internet of Things (IoT) devices demands robust and resource-efficient security solutions. Physically Unclonable Functions (PUFs), which generate unique cryptographic keys from inherent hardware variations, offer a promising approach. However, traditional PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) are susceptible to machine learning (ML) and reliability-based attacks. In this study, we investigate Component-Differentially Challenged XOR-PUFs (CDC-XPUFs), a less explored variant, to address these vulnerabilities. We propose an optimized CDC-XPUF design that incorporates a pre-selection strategy to enhance reliability and introduces a novel lightweight architecture to reduce hardware overhead. Rigorous testing demonstrates that our design significantly lowers resource consumption, maintains strong resistance to ML attacks, and improves reliability, effectively mitigating reliability-based attacks. These results highlight the potential of CDC-XPUFs as a secure and efficient candidate for widespread deployment in resource-constrained IoT systems.
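As a rough illustration of the design space, the sketch below simulates a CDC-XPUF under the standard additive linear delay model (our assumption; the paper's concrete architecture differs): $k$ short arbiter chains each receive their own challenge, and their outputs are XORed into one response bit.

```python
import numpy as np

# Minimal simulation sketch (standard additive delay model, not the paper's
# implementation) of a k-component CDC-XPUF, where each arbiter chain
# receives its own, independently chosen challenge.
rng = np.random.default_rng(0)

n_stages, k = 32, 4                      # short-stage chains, 4 XORed components
W = rng.normal(size=(k, n_stages + 1))   # per-component stage delay weights

def phi(challenge):
    """Map a 0/1 challenge to the parity feature vector of the delay model."""
    c = 1 - 2 * challenge                          # 0/1 -> +1/-1
    # phi_i = product of c_i..c_{n-1}, with a constant 1 appended
    return np.append(np.cumprod(c[::-1])[::-1], 1.0)

def cdc_xpuf_response(challenges):
    """challenges: k distinct challenges, one per component (the 'CDC' part)."""
    outs = [np.sign(W[i] @ phi(challenges[i])) for i in range(k)]
    return int(np.prod(outs) > 0)                  # XOR in the +/-1 domain

chals = rng.integers(0, 2, size=(k, n_stages))     # one challenge per component
print(cdc_xpuf_response(chals))
```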
Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors
Stevens, Samuel, Wenger, Emily, Li, Cathy, Nolte, Niklas, Saxena, Eshika, Charton, François, Lauter, Kristin
Learning with Errors (LWE) is a hard math problem underlying recently standardized post-quantum cryptography (PQC) systems for key exchange and digital signatures. Prior work proposed new machine learning (ML)-based attacks on LWE problems with small, sparse secrets, but these attacks require millions of LWE samples to train on and take days to recover secrets. We propose three key methods -- better preprocessing, angular embeddings and model pre-training -- to improve these attacks, speeding up preprocessing by $25\times$ and improving model sample efficiency by $10\times$. We demonstrate for the first time that pre-training improves and reduces the cost of ML attacks on LWE. Our architecture improvements enable scaling to larger-dimension LWE problems: this work is the first instance of ML attacks recovering sparse binary secrets in dimension $n=1024$, the smallest dimension used in practice for homomorphic encryption applications of LWE where sparse binary secrets are proposed.
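The angular-embedding idea can be sketched in a few lines (our reconstruction of the general technique, not the authors' code): each coordinate $a \in \mathbb{Z}_q$ is mapped to a point on the unit circle, so that $q-1$ and $0$, which are close modulo $q$, are also close in the embedding.

```python
import numpy as np

# Hedged sketch of the angular-embedding idea (parameter names are ours):
# map each LWE coordinate a in Z_q to a point on the unit circle so that
# modular wraparound is continuous for the model.
def angular_embed(a, q):
    theta = 2 * np.pi * np.asarray(a, dtype=np.float64) / q
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

q = 3329
print(angular_embed([0, 1, q - 1], q))   # 0 and q-1 embed to nearby points
```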
ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning
Chowdhury, Animesh Basak, Alrahis, Lilas, Collini, Luca, Knechtel, Johann, Karri, Ramesh, Garg, Siddharth, Sinanoglu, Ozgur, Tan, Benjamin
Oracle-less machine learning (ML) attacks have broken various logic locking schemes. Regular synthesis, which is tailored for area-power-delay optimization, yields netlists where key-gate localities are vulnerable to learning. Thus, we call for security-aware logic synthesis. We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe generator, employing adversarially trained models that can predict state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracy drops to around 50% for ALMOST-synthesized circuits, all without undermining design optimization.
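A toy sketch of the simulated-annealing recipe search follows; the transform names and the surrogate predictor below are placeholders, while ALMOST's actual predictor is an adversarially trained model.

```python
import math, random

# Toy simulated-annealing search over synthesis recipes: minimize the
# predicted attack accuracy (0.5 = random guessing is the ideal floor).
TRANSFORMS = ["rewrite", "refactor", "balance", "resub", "rewrite -z"]

def predicted_attack_accuracy(recipe):
    # Stub for the learned predictor; deterministic per recipe.
    r = random.Random(hash(tuple(recipe)))
    return 0.5 + 0.4 * r.random()

def anneal(steps=500, length=8, t0=1.0, cooling=0.99):
    recipe = [random.choice(TRANSFORMS) for _ in range(length)]
    best, best_acc = recipe[:], predicted_attack_accuracy(recipe)
    cur_acc, t = best_acc, t0
    for _ in range(steps):
        cand = recipe[:]
        cand[random.randrange(length)] = random.choice(TRANSFORMS)  # mutate one step
        acc = predicted_attack_accuracy(cand)
        if acc < cur_acc or random.random() < math.exp((cur_acc - acc) / t):
            recipe, cur_acc = cand, acc        # accept improving (or lucky) moves
            if acc < best_acc:
                best, best_acc = cand[:], acc
        t *= cooling                           # cool the temperature
    return best, best_acc

print(anneal())
```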
Machine Learning in 2022: Data Threats and Backdoors?
Machine-learning algorithms have become a critical part of cybersecurity technology, currently used to identify malware, winnow down the number of alerts presented to security analysts, and prioritize vulnerabilities for patching. Yet such systems could be subverted by knowledgeable attackers in the future, warn experts studying the security of machine-learning (ML) and artificial-intelligence (AI) systems. In a study published last year, researchers found that the redundant properties of neural networks could allow an attacker to hide data within a common neural network file, consuming 20% of the file size without dramatically affecting the performance of the model. In another paper from 2019, researchers showed that a compromised training service could create a backdoor in a neural network that persists even if the network is later retrained for another task. While these two specific research papers show potential threats, the most immediate risks are attacks that steal or modify data, says Gary McGraw, co-founder and CEO of the Berryville Institute of Machine Learning (BIML).
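The data-hiding result is easy to appreciate with a toy reconstruction (ours, not the cited paper's method): payload bits can be tucked into the low-order mantissa bits of float32 weights, changing each parameter by at most one ulp.

```python
import numpy as np

# Toy illustration of weight steganography: hide payload bits in the
# least-significant mantissa bit of float32 parameters, which barely
# perturbs the values the model computes with. (Our sketch, not the
# cited paper's technique.)
def embed(weights, payload_bits):
    raw = weights.astype(np.float32).view(np.uint32).copy()
    raw[: len(payload_bits)] &= ~np.uint32(1)    # clear each LSB
    raw[: len(payload_bits)] |= np.asarray(payload_bits, dtype=np.uint32)
    return raw.view(np.float32)

def extract(weights, n_bits):
    return (weights.view(np.uint32) & 1)[:n_bits]

w = np.random.randn(1000).astype(np.float32)
bits = np.random.randint(0, 2, size=256)
w2 = embed(w, bits)
assert (extract(w2, 256) == bits).all()
print(np.max(np.abs(w2 - w)))   # distortion is tiny (one mantissa ulp)
```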
When I train a model for days...
I'm doing a PhD in security within machine learning, and this is actually an extremely dangerous property of nearly all DNN models, due to how they 'see' data; it's exploited in many ML attacks. DNNs don't see the world as we do (obviously), but more importantly, that means images or data can appear exactly the same to us yet be completely different to a DNN. You can imagine a scenario where a DNN inside an autonomous car is easily tricked into misclassifying road signs. To us, a readable STOP sign will always say STOP: even if it has scratches and dirt on it, we can easily interpret what the sign should be telling us. However, an attacker can use noise (derived from the image of another road sign) to alter the image in tiny ways that cause a DNN to think a STOP sign is actually just a speed limit sign, while to us it still looks exactly like a STOP sign. Deploy such an attack on a self-driving car at a junction with a stop sign and you can imagine how the car would simply drive on rather than stopping. You'd be surprised how easy it is to trick AI; even big companies like YouTube have issues with this in copyright music detection if you perform complex ML attacks on the music. Here's a paper similar to the scenario I described, but using stickers placed in specific spots to make an AI not see stop signs: https://arxiv.org/pdf/1707.08945.pdf - _Waldy_
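For readers who want to see the mechanics, here is a minimal FGSM-style sketch in the spirit of the attack described above (untrained toy model, illustrative only): perturb each pixel by a small epsilon in the direction that increases the loss, leaving the image visually unchanged.

```python
import torch, torch.nn as nn

# Toy FGSM-style perturbation: nudge each pixel by epsilon in the
# direction of the loss gradient. Real attacks target trained DNNs;
# this untrained toy model only shows the mechanics.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in "image"
y = torch.tensor([3])                              # its true label

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.03                                     # tiny, near-invisible step
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Compare predictions before and after the perturbation.
print(model(x).argmax().item(), model(x_adv).argmax().item())
```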
Physically Unclonable Functions and AI: Two Decades of Marriage
The current chapter aims to establish a relationship between artificial intelligence (AI) and hardware security. Such a connection between AI and software security has been confirmed and well reviewed in the relevant literature. The main focus here is to explore the methods borrowed from AI to assess the security of a hardware primitive, namely physically unclonable functions (PUFs), which have found applications in cryptographic protocols, e.g., authentication and key generation. Metrics and procedures devised for this are further discussed. Moreover, by reviewing PUFs designed by applying AI techniques, we give insight into future research directions in this area.
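The canonical example of such AI-based assessment is a modeling attack on a single arbiter PUF. The sketch below (our illustration, assuming the standard linear delay model, not the chapter's code) fits logistic regression to simulated challenge-response pairs and predicts responses almost perfectly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Modeling-attack sketch: learn an arbiter PUF's challenge-response
# behaviour with logistic regression on the linear delay-model features.
rng = np.random.default_rng(1)
n = 64
w = rng.normal(size=n + 1)                       # ground-truth stage delays

def phi(C):                                      # challenges -> parity features
    c = 1 - 2 * C
    return np.hstack([np.cumprod(c[:, ::-1], axis=1)[:, ::-1],
                      np.ones((len(C), 1))])

C = rng.integers(0, 2, size=(20000, n))
r = (phi(C) @ w > 0).astype(int)                 # simulated PUF responses

clf = LogisticRegression(max_iter=1000).fit(phi(C[:15000]), r[:15000])
print("model accuracy:", clf.score(phi(C[15000:]), r[15000:]))   # close to 1.0
```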