Shamsabadi, Ali Shahin
OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness
Franzese, Olive, Shamsabadi, Ali Shahin, Haddadi, Hamed
Though there is much interest in fair AI systems, the problem of fairness noncompliance -- which concerns whether fair models are actually used in practice -- has received less attention. Zero-Knowledge Proofs of Fairness (ZKPoF) address fairness noncompliance by allowing a service provider to prove to external parties that their model serves diverse demographics equitably, with guaranteed confidentiality over proprietary model parameters and data. They have great potential for building public trust and effective AI regulation, but no previous techniques for ZKPoF are fit for real-world deployment. We present OATH, the first ZKPoF framework that is (i) deployably efficient, with client-facing communication comparable to in-the-clear ML-as-a-Service query answering and an offline audit phase that verifies an asymptotically constant number of answered queries; (ii) deployably flexible, with modularity for any score-based classifier given a zero-knowledge proof of correct inference; and (iii) deployably secure, with an end-to-end security model that guarantees confidentiality and fairness across training, inference, and audits. We show that OATH obtains strong robustness against malicious adversaries at concretely efficient parameter settings. Notably, OATH provides a 1343x runtime improvement over previous work for neural network ZKPoF, and scales up to much larger models -- even DNNs with tens of millions of parameters.
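To make the audited statement concrete, the sketch below computes an in-the-clear demographic-parity gap over a batch of answered queries. The metric, threshold, and data layout are illustrative assumptions rather than OATH's actual protocol, which proves such fairness statements in zero knowledge over confidential model parameters and data.

```python
# Illustrative only: the fairness metric, threshold, and data layout are
# assumptions for exposition; OATH proves statements of this kind in zero
# knowledge rather than over revealed scores and group labels.
import numpy as np

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Max difference in positive-prediction rate between demographic groups."""
    preds = (np.asarray(scores) >= threshold).astype(int)
    groups = np.asarray(groups)
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Example: classifier scores answered to clients, with each client's group label.
scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.3]
groups = ["A", "A", "A", "B", "B", "B"]
gap = demographic_parity_gap(scores, groups)
assert gap <= 0.35  # the in-the-clear analogue of a claim an audit would attest to
```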
Context-Aware Membership Inference Attacks against Pre-trained Large Language Models
Chang, Hongyan, Shamsabadi, Ali Shahin, Katevas, Kleomenis, Haddadi, Hamed, Shokri, Reza
To assess memorization and information leakage in models, Membership Inference Attacks (MIAs) aim to determine if a data point was part of a model's training set [1]. However, MIAs designed for pre-trained Large Language Models (LLMs) have been largely ineffective [2, 3]. This is primarily because these MIAs, originally developed for classification models, fail to account for the sequential nature of LLMs. Unlike classification models, which produce a single prediction, LLMs generate text token-by-token, adjusting each prediction based on the context of preceding tokens (i.e., prefix). Prior MIAs overlook token-level loss dynamics and the influence of prefixes on next-token predictability, which contributes to memorization.
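As an illustration of the token-level signal these attacks ignore (a minimal sketch, not the paper's attack; the stand-in model "gpt2" and the final scoring rule are assumptions), the snippet below computes the loss of each token given its prefix under a pre-trained causal LM, which a context-aware MIA would aggregate into a membership score.

```python
# Sketch: per-token losses of a candidate sequence under a pre-trained causal
# LM. A context-aware MIA scores membership from how these losses behave given
# each token's prefix, rather than from a single sequence-level loss.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def token_losses(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Loss of token t given its prefix (tokens < t).
    return F.cross_entropy(logits[0, :-1], ids[0, 1:], reduction="none")

losses = token_losses("The quick brown fox jumps over the lazy dog.")
# A naive aggregate membership score; the paper's context-aware statistic is richer.
score = -losses.mean().item()
```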
Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Smith, Victoria, Shamsabadi, Ali Shahin, Ashurst, Carolyn, Weller, Adrian
Rapid advancements in language models (LMs) have led to their adoption across many sectors. Alongside their potential benefits, such models present a range of risks, including privacy risks. In particular, as LMs have grown in size, their potential to memorise aspects of their training data has increased, resulting in the risk of leaking private information. As LMs become increasingly widespread, it is vital that we understand such privacy risks and how they might be mitigated. To help researchers and policymakers understand the state of knowledge around privacy attacks and mitigations, including where more work is needed, we present the first technical survey on LM privacy. We (i) identify a taxonomy of salient dimensions along which attacks on LMs differ, (ii) survey existing attacks and use our taxonomy of dimensions to highlight key trends, and (iii) discuss existing mitigation strategies, highlighting their strengths and limitations and identifying key gaps, open problems, and areas for concern.
Tubes Among Us: Analog Attack on Automatic Speaker Identification
Ahmed, Shimaa, Wani, Yash, Shamsabadi, Ali Shahin, Yaghini, Mohammad, Shumailov, Ilia, Papernot, Nicolas, Fawaz, Kassem
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning. Yet, machine learning has proven to be vulnerable to adversarial examples. A large number of modern systems protect themselves against such attacks by targeting artificiality, i.e., they deploy mechanisms to detect the lack of human involvement in generating the adversarial examples. However, these defenses implicitly assume that humans are incapable of producing meaningful and targeted adversarial examples. In this paper, we show that this base assumption is wrong. In particular, we demonstrate that for tasks like speaker identification, a human is capable of producing analog adversarial examples directly, with little cost and supervision: by simply speaking through a tube, an adversary can reliably impersonate other speakers in the eyes of ML models for speaker identification. Our findings extend to a range of other acoustic-biometric tasks such as liveness detection, calling into question their use in real-life, security-critical settings such as phone banking.
When the Curious Abandon Honesty: Federated Learning Is Not Private
Boenisch, Franziska, Dziedzic, Adam, Schuster, Roei, Shamsabadi, Ali Shahin, Shumailov, Ilia, Papernot, Nicolas
In federated learning (FL), data does not leave personal devices when they jointly train a machine learning model. Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training. Because data never "leaves" personal devices, FL is often presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade, as even a passive, honest-but-curious attacker observing gradients can reconstruct the data of individual users contributing to the protocol. In this work, we introduce a novel data reconstruction attack that allows an active and dishonest central party to efficiently extract user data from the received gradients. While prior work on data reconstruction in FL relies on solving computationally expensive optimization problems or on making easily detectable modifications to the shared model's architecture or parameters, in our attack the central party makes inconspicuous changes to the shared model's weights before sending them out to the users. We call the modified weights of our attack trap weights. Our active attacker is able to recover user data perfectly, i.e., with zero error, even when this data stems from the same class. Recovery comes at near-zero cost: the attack requires no complex optimization objectives. Instead, our attacker exploits inherent data leakage from model gradients and simply amplifies this effect by maliciously altering the weights of the shared model through the trap weights. These properties enable our attack to scale to fully-connected and convolutional deep neural networks trained with large mini-batches of data. For example, for the high-dimensional vision dataset ImageNet, we perfectly reconstruct more than 50% of the training data points from mini-batches as large as 100 data points.
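To illustrate the inherent leakage being amplified (toy dimensions and a stand-in loss; not the paper's exact trap-weight construction), the sketch below shows that for a fully connected ReLU layer, a neuron's weight gradient divided by its bias gradient equals the average of the inputs that activated it; trap weights are crafted so that, with high probability, a single user's data point activates a given neuron, making this quotient an exact reconstruction.

```python
# Gradient leakage in a fully connected ReLU layer (illustrative settings):
# grad_W[j] is the sum of the inputs that activated neuron j, and grad_b[j]
# counts them, so grad_W[j] / grad_b[j] is their average. If only one input
# in the mini-batch fires neuron j, the quotient reconstructs it exactly.
import torch

torch.manual_seed(0)
batch, d_in, d_out = 4, 8, 16
x = torch.rand(batch, d_in)                  # users' private mini-batch
layer = torch.nn.Linear(d_in, d_out)
loss = torch.relu(layer(x)).sum()            # stand-in for the model's loss
loss.backward()

grad_W, grad_b = layer.weight.grad, layer.bias.grad
j = int(torch.nonzero(grad_b > 0)[0])        # a neuron that fired for some inputs
active = layer(x)[:, j] > 0                  # which inputs fired neuron j
recovered = grad_W[j] / grad_b[j]            # average of the activating inputs
assert torch.allclose(recovered, x[active].mean(0), atol=1e-5)
```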
Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation
Boenisch, Franziska, Dziedzic, Adam, Schuster, Roei, Shamsabadi, Ali Shahin, Shumailov, Ilia, Papernot, Nicolas
Federated learning (FL) is a framework for users to jointly train a machine learning model. FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training. While prior work showed that in vanilla FL a malicious server can extract users' private data from the model updates, in this work we go further and demonstrate that a malicious server can reconstruct user data even in hardened versions of the protocol. More precisely, we propose an attack against FL protected with distributed differential privacy (DDP) and secure aggregation (SA). Our attack method is based on introducing sybil devices that deviate from the protocol to expose individual users' data for reconstruction by the server. The root cause of this vulnerability is a power imbalance: the server orchestrates the whole protocol, and users are given few guarantees about the selection of other users participating in the protocol. Moving forward, we discuss requirements for privacy guarantees in FL. We conclude that users should only participate in the protocol when they trust the server or when they apply local primitives such as local DP, shifting power away from the server. Yet, the latter approaches come at a significant cost in terms of the trained model's performance, making them less likely to be deployed in practice.
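A back-of-the-envelope sketch of why sybils can defeat the combination of SA and DDP (toy dimensions and noise scale; not the paper's exact protocol): under SA the server sees only the sum of updates, and under DDP each participant contributes only a 1/n share of the required noise, so a round filled with server-controlled sybils that submit known updates and no noise leaves the victim's update protected by only its own small noise share.

```python
# Toy numbers, Gaussian noise split evenly across n participants (assumption).
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 100                      # update dimension, participants per round
victim_update = rng.normal(size=d)
sigma_total = 1.0                     # noise std the aggregate is supposed to carry
victim_noise = rng.normal(scale=sigma_total / np.sqrt(n), size=d)  # victim's 1/n share

sybil_updates = np.zeros((n - 1, d))  # sybils deviate: known updates, no noise added
aggregate = victim_update + victim_noise + sybil_updates.sum(axis=0)

recovered = aggregate - sybil_updates.sum(axis=0)
# Per-coordinate noise on the recovered update is sigma_total/sqrt(n), not sigma_total.
print(np.std(recovered - victim_update))
```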
Private Multi-Winner Voting for Machine Learning
Dziedzic, Adam, Choquette-Choo, Christopher A, Dullerud, Natalie, Suriyakumar, Vinith Menon, Shamsabadi, Ali Shahin, Kaleem, Muhammad Ahmad, Jha, Somesh, Papernot, Nicolas, Wang, Xiao
Private multi-winner voting is the task of revealing $k$-hot binary vectors satisfying a bounded differential privacy (DP) guarantee. This task has been understudied in the machine learning literature despite its prevalence in many domains such as healthcare. We propose three new DP multi-winner mechanisms: Binary, $\tau$, and Powerset voting. Binary voting operates independently per label through composition. $\tau$ voting bounds votes optimally in their $\ell_2$ norm for tight data-independent guarantees. Powerset voting operates over the entire binary vector by viewing the possible outcomes as a power set. Our theoretical and empirical analysis shows that Binary voting can be a competitive mechanism on many tasks unless there are strong correlations between labels, in which case Powerset voting outperforms it. We use our mechanisms to enable privacy-preserving multi-label learning in the central setting by extending the canonical single-label technique: PATE. We find that our techniques outperform current state-of-the-art approaches on large, real-world healthcare data and standard multi-label benchmarks. We further enable multi-label confidential and private collaborative (CaPC) learning and show that model performance can be significantly improved in the multi-site setting.
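A minimal sketch in the spirit of the Binary voting mechanism (the noise distribution, scale, and per-label release rule below are illustrative assumptions; the paper calibrates the noise and accounts for privacy via composition across labels): each label is released independently by a noisy comparison of teacher vote counts.

```python
# Illustrative per-label noisy aggregation; not the paper's calibrated mechanism.
import numpy as np

rng = np.random.default_rng(0)

def binary_vote(teacher_labels, sigma=2.0):
    """teacher_labels: (n_teachers, n_labels) binary matrix of per-teacher votes.
    Returns a k-hot vector: label j is released as 1 if the noisy count of
    teachers voting 1 exceeds the noisy count voting 0."""
    votes_one = teacher_labels.sum(axis=0).astype(float)
    votes_zero = teacher_labels.shape[0] - votes_one
    noisy_one = votes_one + rng.normal(scale=sigma, size=votes_one.shape)
    noisy_zero = votes_zero + rng.normal(scale=sigma, size=votes_zero.shape)
    return (noisy_one > noisy_zero).astype(int)

teachers = rng.integers(0, 2, size=(250, 14))   # e.g., 250 teachers, 14 labels
released = binary_vote(teachers)                # privacy cost composes over the 14 labels
```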
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
Sajadmanesh, Sina, Shamsabadi, Ali Shahin, Bellet, Aurélien, Gatica-Perez, Daniel
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with Differential Privacy (DP). We propose a novel differentially private GNN based on Aggregation Perturbation (GAP), which adds stochastic noise to the GNN's aggregation function to statistically obfuscate the presence of a single edge (edge-level privacy) or a single node and all its adjacent edges (node-level privacy). Tailored to the specifics of private learning, GAP's new architecture is composed of three separate modules: (i) the encoder module, where we learn private node embeddings without relying on the edge information; (ii) the aggregation module, where we compute noisy aggregated node embeddings based on the graph structure; and (iii) the classification module, where we train a neural network on the private aggregations for node classification without further querying the graph edges. GAP's major advantage over previous approaches is that it can benefit from multi-hop neighborhood aggregations, and guarantees both edge-level and node-level DP not only for training, but also at inference with no additional costs beyond the training's privacy budget. We analyze GAP's formal privacy guarantees using Rényi DP and conduct empirical experiments over three real-world graph datasets. We demonstrate that GAP offers significantly better accuracy-privacy trade-offs than state-of-the-art DP-GNN approaches and naive MLP-based baselines. Our code is publicly available at https://github.com/sisaman/GAP.
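A minimal sketch of the aggregation-perturbation idea at the heart of the aggregation module (dimensions, noise scale, and the plain sum aggregator are illustrative; GAP's full architecture, normalization, and Rényi-DP accounting are specified in the paper): row-normalizing embeddings bounds the sensitivity of the neighborhood sum to a single edge, so Gaussian noise added to that sum obfuscates any one edge's presence.

```python
# Illustrative noisy neighborhood aggregation; not GAP's exact implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 16
X = rng.normal(size=(n, d))                       # node embeddings from the encoder
A = (rng.random((n, n)) < 0.05).astype(float)     # toy adjacency matrix

# Bound each row's norm to 1 so one edge changes the sum by at most norm 1.
X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
sensitivity = 1.0
sigma = 4.0                                       # noise multiplier (illustrative)

noisy_agg = A @ X + rng.normal(scale=sigma * sensitivity, size=(n, d))
# noisy_agg feeds the classification module; the graph edges are not queried again.
```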
Deep Private-Feature Extraction
Osia, Seyed Ali, Taheri, Ali, Shamsabadi, Ali Shahin, Katevas, Kleomenis, Haddadi, Hamed, Rabiee, Hamid R.
We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information theoretic constraints. Using the selective exchange of information between a user's device and a service provider, DPFE enables the user to prevent certain sensitive information from being shared with a service provider, while allowing them to extract approved information using their model. We introduce and utilize log-rank privacy, a novel measure to assess the effectiveness of DPFE in removing sensitive information, and use it to compare different models based on their accuracy-privacy tradeoff. We then implement and evaluate the performance of DPFE on smartphones to understand its complexity, resource demands, and efficiency tradeoffs. Our results on benchmark image datasets demonstrate that under moderate resource utilization, DPFE can achieve high accuracy for primary tasks while preserving the privacy of sensitive features.
Distributed One-class Learning
Shamsabadi, Ali Shahin, Haddadi, Hamed, Cavallaro, Andrea
We propose a cloud-based filter trained to block third parties from uploading privacy-sensitive images of others to online social media. The proposed filter uses Distributed One-Class Learning, which decomposes the cloud-based filter into multiple one-class classifiers. Each one-class classifier captures the properties of one class of privacy-sensitive images with an autoencoder. The multi-class filter is then reconstructed by combining the parameters of the one-class autoencoders. Training takes place on edge devices (e.g. smartphones), so users do not need to upload their private and/or sensitive images to the cloud. A major advantage of the proposed filter over existing distributed learning approaches is that users cannot access, even indirectly, the parameters of other users. Moreover, the filter can cope with the imbalanced and complex distribution of the image content and with new users being added independently of one another. We evaluate the performance of the proposed distributed filter using the exemplar task of blocking a user from sharing privacy-sensitive images of other users. In particular, we validate the behavior of the proposed multi-class filter on non-privacy-sensitive images, its accuracy as the number of classes increases, and its robustness to attacks when an adversarial user has access to privacy-sensitive images of other users.
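A minimal sketch of one one-class classifier in such a filter (model size, threshold rule, and training details are illustrative assumptions, not the paper's exact setup): an autoencoder is trained on-device on a single user's class of privacy-sensitive images, and at filtering time a low reconstruction error flags an image as belonging to that class.

```python
# Illustrative one-class classifier via autoencoder reconstruction error.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 32 * 32                                   # flattened toy image size
autoencoder = nn.Sequential(
    nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d)
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

private_images = torch.rand(256, d)           # one class of sensitive images (toy data)
for _ in range(100):                          # on-device training
    recon = autoencoder(private_images)
    loss = ((recon - private_images) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def is_sensitive(image, threshold=0.05):      # threshold tuned on held-out data in practice
    with torch.no_grad():
        err = ((autoencoder(image) - image) ** 2).mean()
    return err.item() < threshold             # low error => image matches the protected class
```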