SALSA: Attacking Lattice Cryptography with Transformers

Neural Information Processing Systems

Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, quantum-resistant cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and mix half-trained models and statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based cryptographic schemes. SALSA can fully recover secrets for small- to mid-size LWE instances with sparse binary secrets, and may scale to attack real-world LWE-based cryptosystems.
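The LWE setting the abstract describes can be illustrated with a toy sample generator. This is a minimal sketch with assumed toy parameters (dimension `n`, modulus `q`, Hamming weight `h`), not the SALSA attack itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, h = 30, 251, 3  # toy dimension, modulus, and secret Hamming weight

# sparse binary secret with exactly h ones (the regime SALSA targets)
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=h, replace=False)] = 1

def lwe_sample():
    """Return one LWE pair (a, b) with b = a.s + e mod q for small error e."""
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-2, 3))  # small centered error
    b = int((a @ s + e) % q)
    return a, b

a, b = lwe_sample()
# recovering s from many such (a, b) pairs is the Learning With Errors problem
```

An attacker sees only the pairs `(a, b)`; the hardness of separating `a @ s` from the noise `e` is what the cryptosystem rests on.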





Encrypted Vector Similarity Computations Using Partially Homomorphic Encryption: Applications and Performance Analysis

Serengil, Sefik, Ozpinar, Alper

arXiv.org Artificial Intelligence

This paper explores the use of partially homomorphic encryption (PHE) for encrypted vector similarity search, with a focus on facial recognition and broader applications like reverse image search, recommendation engines, and large language models (LLMs). While fully homomorphic encryption (FHE) exists, we demonstrate that encrypted cosine similarity can be computed using PHE, offering a more practical alternative. Since PHE does not directly support cosine similarity, we propose a method that normalizes vectors in advance, enabling dot product calculations as a proxy. We also apply min-max normalization to handle negative dimension values. Experiments on the Labeled Faces in the Wild (LFW) dataset use DeepFace's FaceNet128d, FaceNet512d, and VGG-Face (4096d) models in a two-tower setup. Pre-encrypted embeddings are stored in one tower, while an edge device captures images, computes embeddings, and performs encrypted-plaintext dot products via additively homomorphic encryption. We implement this with LightPHE, evaluating Paillier, Damgard-Jurik, and Okamoto-Uchiyama schemes, excluding others due to performance or decryption complexity. Tests at 80-bit and 112-bit security (NIST-secure until 2030) compare PHE against FHE (via TenSEAL), analyzing encryption, decryption, operation time, cosine similarity loss, key/ciphertext sizes. Results show PHE is less computationally intensive, faster, and produces smaller ciphertexts/keys, making it well-suited for memory-constrained environments and real-world privacy-preserving encrypted similarity search.
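The encrypted-plaintext dot product at the core of the scheme above can be sketched with a from-scratch Paillier cryptosystem. The key sizes here are deliberately tiny and the prime test is a Fermat check, for illustration only; the paper uses LightPHE with NIST-level parameters, and real deployments need a proper key generator:

```python
import math, random

def keygen(bits=256):
    """Toy Paillier keypair; real systems use n of 2048+ bits."""
    def prime(b):
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11)):
                return p
    p, q = prime(bits // 2), prime(bits // 2)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we pick g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def enc_dot(pk, enc_x, y):
    """Encrypted-plaintext dot product: prod Enc(x_i)^y_i = Enc(sum x_i*y_i)."""
    (n,) = pk
    n2 = n * n
    acc = 1
    for c, w in zip(enc_x, y):
        acc = acc * pow(c, w, n2) % n2
    return acc

pk, sk = keygen()
x, y = [3, 1, 4], [2, 7, 1]
enc_x = [encrypt(pk, v) for v in x]
print(decrypt(sk, enc_dot(pk, enc_x, y)))  # 17 = 3*2 + 1*7 + 4*1
```

This is why pre-normalizing vectors matters: with unit-norm embeddings, the decrypted dot product equals the cosine similarity, even though Paillier only supports addition of ciphertexts and multiplication by plaintext scalars.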



A Machine Learning-Based Secure Face Verification Scheme and Its Applications to Digital Surveillance

Wang, Huan-Chih, Wu, Ja-Ling

arXiv.org Artificial Intelligence

Face verification is a well-known image analysis application and is widely used to recognize individuals in contemporary society. However, most real-world recognition systems ignore the importance of protecting the identity-sensitive facial images that are used for verification. To address this problem, we investigate how to implement a secure face verification system that protects the facial images from being imitated. In our work, we use the DeepID2 convolutional neural network to extract the features of a facial image and an EM algorithm to solve the facial verification problem. To maintain the privacy of facial images, we apply homomorphic encryption schemes to encrypt the facial data and compute the EM algorithm in the ciphertext domain. We develop three face verification systems for surveillance (or entrance) control of a local community based on three levels of privacy concerns. The associated timing performances are presented to demonstrate their feasibility for practical implementation.


Global Outlier Detection in a Federated Learning Setting with Isolation Forest

Malpetti, Daniele, Azzimonti, Laura

arXiv.org Artificial Intelligence

Federated learning (FL) is a machine learning paradigm where multiple parties collaborate to train a shared machine learning model without centralizing data at a single location [1]. During model training, data holders refrain from directly exchanging raw data; instead, they share model parameters such as gradients, weights, or other forms of processed information. This distributed learning paradigm is typically facilitated by a coordinating server, often referred to as the aggregator, which collects local contributions from data holders, commonly known as clients, and aggregates them to create a shared global model.

Across several domains, it is common to find examples of data points that are local outliers but not global outliers. For example, in the medical field, a given medical condition may be common in one region and rare in another [8]. Therefore, in a study conducted at a center located in a low-prevalence region, individuals suffering from that condition may appear as local outliers. However, if the center participates in a FL multicenter study including centers in areas where the condition is more common, those individuals would not appear as global outliers. In most cases, for the training of FL models, a consortium would be interested in discarding global outliers and retaining local ones.
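The local-versus-global distinction can be shown numerically. This is a minimal numpy sketch on synthetic data using a z-score, not the paper's isolation-forest method: a point extreme within one client's data can be ordinary in the pooled data:

```python
import numpy as np

rng = np.random.default_rng(1)
# two clients with different local distributions of the same feature
client_a = rng.normal(0.0, 1.0, 500)  # e.g. low-prevalence region
client_b = rng.normal(5.0, 1.0, 500)  # e.g. high-prevalence region
x = 4.0                               # a point observed at client A

def zscore(v, data):
    """Standardized distance of v from the sample mean."""
    return abs(v - data.mean()) / data.std()

local_z = zscore(x, client_a)                                # extreme locally
global_z = zscore(x, np.concatenate([client_a, client_b]))   # ordinary globally
print(local_z > 3, global_z < 2)  # True True
```

A purely local detector at client A would discard `x`, while a global view over both clients keeps it, which is the behavior a federated detector needs to reproduce without pooling the raw data.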


Privacy-Preserving Distributed Nonnegative Matrix Factorization

Lari, Ehsan, Arablouei, Reza, Werner, Stefan

arXiv.org Artificial Intelligence

Nonnegative matrix factorization (NMF) is an effective data representation tool with numerous applications in signal processing and machine learning. However, deploying NMF in a decentralized manner over ad-hoc networks introduces privacy concerns due to the conventional approach of sharing raw data among network agents. To address this, we propose a privacy-preserving algorithm for fully-distributed NMF that decomposes a distributed large data matrix into left and right matrix factors while safeguarding each agent's local data privacy. It facilitates collaborative estimation of the left matrix factor among agents and enables them to estimate their respective right factors without exposing raw data. To ensure data privacy, we secure information exchanges between neighboring agents utilizing the Paillier cryptosystem, a probabilistic asymmetric algorithm for public-key cryptography that allows computations on encrypted data without decryption. Simulation results conducted on synthetic and real-world datasets demonstrate the effectiveness of the proposed algorithm in achieving privacy-preserving distributed NMF over ad-hoc networks.
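The factorization being distributed can be sketched in its centralized, unencrypted form. These are the classic multiplicative updates of Lee and Seung, shown only to fix notation; the paper's contribution is carrying out such an estimation across agents with Paillier-secured exchanges rather than this single-machine loop:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 15))  # nonnegative data matrix to factor
k = 4                     # target rank
W = rng.random((20, k))   # left factor (shared among agents in the paper)
H = rng.random((k, 15))   # right factor (kept local per agent)

# multiplicative updates keep W and H nonnegative and reduce ||V - WH||
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H)
```

In the distributed setting, each agent holds a column block of `V` and its own `H` block, so only the updates touching the shared `W` require the encrypted exchanges with neighbors.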