Mireshghallah, Fatemehsadat
Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
Tang, Xinyu, Shin, Richard, Inan, Huseyin A., Manoel, Andre, Mireshghallah, Fatemehsadat, Lin, Zinan, Gopi, Sivakanth, Kulkarni, Janardhan, Sim, Robert
We study the problem of in-context learning (ICL) with large language models (LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leak or regurgitate the private examples demonstrated in the prompt. We propose a novel algorithm that generates synthetic few-shot demonstrations from the private dataset with formal differential privacy (DP) guarantees, and show empirically that it can achieve effective ICL. We conduct extensive experiments on standard benchmarks and compare our algorithm with non-private ICL and zero-shot solutions. Our results demonstrate that our algorithm can achieve competitive performance with strong privacy levels.

The emergence of in-context learning (ICL) with large language models (LLMs), popularized by the seminal work of Brown et al. (2020), has revolutionized the field of natural language processing and machine learning; see Dong et al. (2023) for a survey on ICL and the references therein. In-context learning involves downstream task adaptation without modifying a pre-trained model's weights. This is achieved by conditioning the model through a series of demonstrations of the task at hand appended as a prompt. An advantage of ICL is that it offers a cost-effective and adaptable alternative to fine-tuning LLMs. By leveraging the model's pre-trained knowledge, it enables efficient generalization across tasks, allows for quick adaptation to new domains or concepts, and requires only a handful of labeled examples for adaptation. However, privacy is a concern when deploying LLMs with users' data incorporated into prompts. As an example, consider healthcare AI applications, where clinical reports belonging to patients may be used as demonstrations to provide relevant context to the LLM to answer queries. A malicious adversary might attempt to circumvent API restrictions through jailbreaking, thereby gaining direct access to the demonstrations, as depicted in Figure 1. More generally, it is a major concern that LLMs may regurgitate prompt data in their output (Priyanshu et al., 2023; Duan et al., 2023; Wang et al., 2023). These scenarios raise privacy risks regarding the data used for constructing the prompt.
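The abstract does not spell out the generation procedure. As a rough illustration of the general idea (privately aggregating next-token distributions computed from disjoint subsets of the private data), the following Python sketch uses a placeholder `next_token_probs` function and an assumed partition-average-noise scheme; it is not the authors' algorithm.

```python
import numpy as np

VOCAB = ["positive", "negative", "neutral", "review:", "label:", "<eos>"]

def next_token_probs(prompt: str, demos: list[str]) -> np.ndarray:
    """Placeholder for an LLM next-token distribution conditioned on a few
    private demonstrations. A real system would query the model here."""
    rng = np.random.default_rng(abs(hash(prompt + "".join(demos))) % (2**32))
    p = rng.random(len(VOCAB))
    return p / p.sum()

def dp_generate_demo(private_examples, num_groups=10, sigma=0.5, max_len=20):
    """Generate one synthetic demonstration token-by-token: average the
    next-token distributions across disjoint groups of private examples,
    add Gaussian noise, and pick the highest-scoring token."""
    rng = np.random.default_rng(0)
    groups = np.array_split(np.array(private_examples, dtype=object), num_groups)
    synthetic = []
    for _ in range(max_len):
        prompt = " ".join(synthetic)
        avg = np.mean([next_token_probs(prompt, list(g)) for g in groups], axis=0)
        noisy = avg + rng.normal(0.0, sigma / num_groups, size=len(VOCAB))
        token = VOCAB[int(np.argmax(noisy))]
        if token == "<eos>":
            break
        synthetic.append(token)
    return " ".join(synthetic)

if __name__ == "__main__":
    fake_private_data = [f"review {i} label positive" for i in range(100)]
    print(dp_generate_demo(fake_private_data))
```

In practice the noise scale, the number of groups, and the number of generated demonstrations would be chosen to meet a target (ε, δ) budget via a DP accountant.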
LatticeGen: A Cooperative Framework which Hides Generated Text in a Lattice for Privacy-Aware Generation on Cloud
Zhang, Mengke, He, Tianxing, Wang, Tianle, Mi, Lu, Mireshghallah, Fatemehsadat, Chen, Binyi, Wang, Hao, Tsvetkov, Yulia
In the current user-server interaction paradigm of prompted generation with large language models (LLMs) on cloud, the server fully controls the generation process, which leaves zero options for users who want to keep the generated text to themselves. We propose LatticeGen, a cooperative framework in which the server still handles most of the computation while the user controls the sampling operation. The key idea is that the true generated sequence is mixed with noise tokens by the user and hidden in a noised lattice. Considering potential attacks from a hypothetically malicious server and how the user can defend against them, we propose the repeated beam-search attack and the mixing noise scheme. In our experiments we apply LatticeGen to protect both the prompt and the generation. It is shown that while the noised lattice degrades generation quality, LatticeGen successfully protects the true generation to a remarkable degree under strong attacks (more than 50% of the semantic content remains hidden, as measured by BERTScore).
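The abstract only sketches the lattice idea, so the toy snippet below illustrates just the client-side mixing step: at each timestep the true token is hidden among randomly drawn noise tokens, and only the user remembers which slot is real. The function names and the width-2 lattice are assumptions for illustration; the actual protocol (server-side lattice decoding, the proposed mixing noise scheme, and defenses against the repeated beam-search attack) is considerably more involved.

```python
import random

def build_noised_lattice(true_tokens, vocab, width=2, seed=0):
    """Toy client-side mixing: at each timestep, hide the true token among
    (width - 1) randomly drawn noise tokens and shuffle, remembering which
    slot holds the true token. The server only ever sees the lattice."""
    rng = random.Random(seed)
    lattice, true_positions = [], []
    for tok in true_tokens:
        noise = [rng.choice(vocab) for _ in range(width - 1)]
        options = noise + [tok]
        rng.shuffle(options)
        lattice.append(options)
        true_positions.append(options.index(tok))
    return lattice, true_positions

def recover_true_sequence(lattice, true_positions):
    """Only the user, who kept the positions, can read the true sequence."""
    return [step[pos] for step, pos in zip(lattice, true_positions)]

if __name__ == "__main__":
    vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
    true_seq = ["the", "cat", "sat", "on", "a", "mat"]
    lattice, positions = build_noised_lattice(true_seq, vocab, width=2)
    print("server sees:", lattice)
    print("user recovers:", recover_true_sequence(lattice, positions))
```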
Membership Inference Attacks against Language Models via Neighbourhood Comparison
Mattern, Justus, Mireshghallah, Fatemehsadat, Jin, Zhijing, Schölkopf, Bernhard, Sachan, Mrinmaya, Berg-Kirkpatrick, Taylor
Membership inference attacks (MIAs) aim to predict whether a data sample was present in the training data of a machine learning model or not, and are widely used for assessing the privacy risks of language models. Most existing attacks rely on the observation that models tend to assign higher probabilities to their training samples than non-training points. However, simple thresholding of the model score in isolation tends to lead to high false-positive rates as it does not account for the intrinsic complexity of a sample. Recent work has demonstrated that reference-based attacks, which compare model scores to those obtained from a reference model trained on similar data, can substantially improve the performance of MIAs. However, in order to train reference models, attacks of this kind make the strong and arguably unrealistic assumption that an adversary has access to samples closely resembling the original training data. Therefore, we investigate their performance in more realistic scenarios and find that they are highly fragile with respect to the data distribution used to train reference models. To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution. We show that, in addition to being competitive with reference-based attacks that have perfect knowledge about the training data distribution, our attack clearly outperforms existing reference-free attacks as well as reference-based attacks with imperfect knowledge, which demonstrates the need for a reevaluation of the threat model of adversarial attacks.
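The comparison rule described in the abstract can be made concrete with a short sketch: score a candidate sample by the gap between the target model's loss on it and the average loss on perturbed neighbour texts, and flag it as a member when the gap is strongly negative. The neighbour generator and the loss function below are simple placeholders (the paper generates neighbours with a masked language model), and the threshold is illustrative.

```python
import random

def neighbour_texts(text, n_neighbours=10, seed=0):
    """Placeholder neighbour generator: swap two adjacent words. The paper
    proposes neighbours with a masked language model instead."""
    rng = random.Random(seed)
    words = text.split()
    neighbours = []
    for _ in range(n_neighbours):
        w = words[:]
        if len(w) > 1:
            i = rng.randrange(len(w) - 1)
            w[i], w[i + 1] = w[i + 1], w[i]
        neighbours.append(" ".join(w))
    return neighbours

def neighbourhood_score(loss_fn, text, n_neighbours=10):
    """Score = loss(text) - mean loss over neighbours. Strongly negative
    scores suggest the target model has memorized the exact phrasing."""
    losses = [loss_fn(t) for t in neighbour_texts(text, n_neighbours)]
    return loss_fn(text) - sum(losses) / len(losses)

def is_member(loss_fn, text, threshold=-0.1):
    """Flag the sample as a likely training member if its loss is clearly
    lower than that of its slightly perturbed neighbours."""
    return neighbourhood_score(loss_fn, text) < threshold

if __name__ == "__main__":
    # Toy loss: pretend the target model memorized this exact sentence.
    memorized = "the quick brown fox jumps over the lazy dog"
    toy_loss = lambda t: 0.5 if t == memorized else 2.0
    print(is_member(toy_loss, memorized))                       # True
    print(is_member(toy_loss, "an unrelated unseen sentence"))  # False
```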
Privacy-Preserving Domain Adaptation of Semantic Parsers
Mireshghallah, Fatemehsadat, Su, Yu, Hashimoto, Tatsunori, Eisner, Jason, Shin, Richard
In task-oriented dialogue systems, such as Siri and Alexa, a software agent parses a user's intent into a program, executes it and then communicates the results back to the user (Andreas et al., 2020; Li et al., 2022; Cheng et al., 2020; Gupta et al., 2018; Young et al., 2013). As a result of their growing popularity, these systems face an increasing demand to improve their linguistic coverage (How do users talk?) as well as functional coverage (What are users trying to do?). An input utterance to such a system could look like this: "Could you ..."

To mitigate that problem, Differentially Private (DP) training algorithms, such as DP-SGD (Abadi et al., 2016; Dwork et al., 2006), can be used to provide worst-case guarantees on the information leakage of a trained model. This guarantee is controlled by the privacy budget ϵ, where lower epsilon means higher privacy. But while DP-SGD could be used to adapt (fine-tune) a semantic parser on unannotated private data, there is a limit to what can be done in this way. Even if some users are asking the system to hop up and down, fine-tuning is unlikely to make it grow legs.
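Since the excerpt leans on DP-SGD and the privacy budget ϵ, here is a minimal sketch of the DP-SGD core loop (Abadi et al., 2016) on a toy logistic-regression objective: clip each per-example gradient, add calibrated Gaussian noise, and take a step. The hyperparameters are illustrative, and a real implementation would additionally track ϵ with a privacy accountant.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grad(w, x, y):
    """Gradient of the logistic loss for a single example."""
    return (sigmoid(x @ w) - y) * x

def dp_sgd(X, Y, clip_norm=1.0, noise_mult=1.0, lr=0.1, epochs=5, batch=32, seed=0):
    """DP-SGD core loop: clip each per-example gradient to `clip_norm`,
    sum, add Gaussian noise with std `noise_mult * clip_norm`, average,
    and take a gradient step. The privacy budget epsilon would follow
    from (noise_mult, batch / len(X), epochs) via a privacy accountant."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            grads = np.stack([per_example_grad(w, X[i], Y[i]) for i in b])
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            clipped = grads / np.maximum(1.0, norms / clip_norm)
            noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
            w -= lr * (clipped.sum(axis=0) + noise) / len(b)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(256, 5))
    Y = (X[:, 0] + 0.1 * rng.normal(size=256) > 0).astype(float)
    print(dp_sgd(X, Y))
```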
Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization
Priyanshu, Aman, Vijay, Supriti, Kumar, Ayush, Naidu, Rakshit, Mireshghallah, Fatemehsadat
LLM-powered chatbots are becoming widely adopted in applications such as healthcare, personal assistants, industry hiring decisions, etc. In many of these cases, chatbots are fed sensitive, personal information in their prompts, as samples for in-context learning, retrieved records from a database, or as part of the conversation. The information provided in the prompt could directly appear in the output, which can have privacy ramifications if the prompt contains sensitive information. As such, in this paper, we aim to understand the input copying and regurgitation capabilities of these models during inference, and how they can be directly instructed to limit this copying by complying with regulations such as HIPAA and GDPR, based on their internal knowledge of them. More specifically, we find that when ChatGPT is prompted to summarize cover letters of 100 candidates, it retains personally identifiable information (PII) verbatim in 57.4% of cases, and we find this retention to be non-uniform across different subgroups of people, based on attributes such as gender identity. We then probe ChatGPT's perception of privacy-related policies and privatization mechanisms by directly instructing it to provide compliant outputs, and observe a significant omission of PII from the output.
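An illustrative harness for the kind of measurement described above: check whether known PII strings from each input reappear verbatim in the model's summary, and report the leak rate overall and per subgroup. The `summarize` callable is a placeholder for an actual chatbot call; this is not the authors' evaluation pipeline.

```python
from collections import defaultdict

def verbatim_pii_rate(records, summarize):
    """For each record, get a summary from the (placeholder) `summarize`
    callable and check whether any of its PII strings appear verbatim.
    Returns the overall leak rate and a per-subgroup breakdown."""
    leaks, by_group = 0, defaultdict(lambda: [0, 0])
    for rec in records:
        summary = summarize(rec["text"])
        leaked = any(pii in summary for pii in rec["pii"])
        leaks += leaked
        by_group[rec["group"]][0] += leaked
        by_group[rec["group"]][1] += 1
    overall = leaks / len(records)
    per_group = {g: hits / total for g, (hits, total) in by_group.items()}
    return overall, per_group

if __name__ == "__main__":
    # Toy "summarizer" that copies the first sentence, leaking any PII in it.
    summarize = lambda text: text.split(".")[0]
    records = [
        {"text": "Jane Doe, jane@example.com. Experienced nurse.",
         "pii": ["Jane Doe", "jane@example.com"], "group": "A"},
        {"text": "Cover letter. Contact: 555-0100.",
         "pii": ["555-0100"], "group": "B"},
    ]
    print(verbatim_pii_rate(records, summarize))
```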
Smaller Language Models are Better Black-box Machine-Generated Text Detectors
Mireshghallah, Fatemehsadat, Mattern, Justus, Gao, Sicun, Shokri, Reza, Berg-Kirkpatrick, Taylor
With the advent of fluent generative language models that can produce convincing utterances very similar to those written by humans, distinguishing whether a piece of text is machine-generated or human-written becomes more challenging and more important, as such models could be used to spread misinformation, fake news, and fake reviews, and to mimic certain authors and figures. To this end, a slew of methods have been proposed to detect machine-generated text. Most of these methods need access to the logits of the target model or the ability to sample from it. One such black-box detection method relies on the observation that generated text is locally optimal under the likelihood function of the generator, while human-written text is not. We find that, overall, smaller and partially-trained models are better universal text detectors: they can more precisely detect text generated from both small and larger models. Interestingly, we find that whether the detector and generator were trained on the same data is not critically important to the detection success. For instance, the OPT-125M model has an AUC of 0.81 in detecting ChatGPT generations, whereas a larger model from the GPT family, GPTJ-6B, has an AUC of 0.45.
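As a simplified illustration of likelihood-based black-box detection with a small model, the sketch below scores each text by its average log-likelihood under `facebook/opt-125m` (via Hugging Face Transformers) and computes the AUC separating machine-generated from human-written texts. This is a baseline in the spirit of the paper, not the exact local-optimality statistic it studies.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import roc_auc_score

MODEL_NAME = "facebook/opt-125m"  # small detector model, as in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under the detector model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return -loss.item()

def detection_auc(texts, is_machine):
    """Higher likelihood under the small model => more likely machine text.
    AUC over 0/1 labels (1 = machine-generated)."""
    scores = [avg_log_likelihood(t) for t in texts]
    return roc_auc_score(is_machine, scores)

if __name__ == "__main__":
    texts = [
        "The quick brown fox jumps over the lazy dog.",            # human-written
        "In conclusion, it is important to note that the topic "
        "discussed above is very important and relevant today.",   # machine-like
    ]
    print(detection_auc(texts, [0, 1]))
```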
Non-Parametric Temporal Adaptation for Social Media Topic Classification
Mireshghallah, Fatemehsadat, Vogler, Nikolai, He, Junxian, Florez, Omar, El-Kishky, Ahmed, Berg-Kirkpatrick, Taylor
User-generated social media data is constantly changing as new trends influence online discussion and personal information is deleted due to privacy concerns. However, most current NLP models are static and rely on fixed training data, which means they are unable to adapt to temporal change -- both test distribution shift and deleted training data -- without frequent, costly re-training. In this paper, we study temporal adaptation through the task of longitudinal hashtag prediction and propose a non-parametric dense retrieval technique, which does not require re-training, as a simple but effective solution. In experiments on a newly collected, publicly available, year-long Twitter dataset exhibiting temporal distribution shift, our method improves by 64.12% over the best parametric baseline without any of its costly gradient-based updating. Our dense retrieval approach is also particularly well-suited to dynamically deleted user data in line with data privacy laws, with negligible computational cost and performance loss.
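A minimal sketch of the non-parametric retrieval idea: embed posts into a dense datastore, predict a hashtag for a new post by majority vote over its nearest neighbours, and honor deletion requests by dropping the corresponding rows. The bag-of-words `embed` function is a placeholder for a trained dense encoder; this is an illustration, not the paper's retrieval system.

```python
import numpy as np

def embed(text, dim=64):
    """Placeholder dense encoder: hash words into a fixed-size vector.
    A real system would use a trained sentence encoder instead."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

class RetrievalHashtagPredictor:
    """Non-parametric predictor: no training, just a datastore of
    (text, embedding, hashtag) entries that can grow or shrink over time."""
    def __init__(self):
        self.texts, self.vecs, self.tags = [], [], []

    def add(self, text, hashtag):
        self.texts.append(text)
        self.vecs.append(embed(text))
        self.tags.append(hashtag)

    def delete(self, text):
        """Honor a user deletion request by dropping that post's entry."""
        keep = [i for i, t in enumerate(self.texts) if t != text]
        self.texts = [self.texts[i] for i in keep]
        self.vecs = [self.vecs[i] for i in keep]
        self.tags = [self.tags[i] for i in keep]

    def predict(self, text, k=3):
        """Majority vote over the k nearest neighbours by cosine similarity."""
        sims = np.array(self.vecs) @ embed(text)
        top = np.argsort(-sims)[:k]
        labels = [self.tags[i] for i in top]
        return max(set(labels), key=labels.count)

if __name__ == "__main__":
    p = RetrievalHashtagPredictor()
    p.add("world cup final tonight", "#worldcup")
    p.add("amazing goal in the cup final", "#worldcup")
    p.add("new phone launch event", "#tech")
    print(p.predict("who will win the final tonight"))  # likely #worldcup
```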
FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations
Garcia, Mirian Hipolito, Manoel, Andre, Diaz, Daniel Madrigal, Mireshghallah, Fatemehsadat, Sim, Robert, Dimitriadis, Dimitrios
In this paper we introduce "Federated Learning Utilities and Tools for Experimentation" (FLUTE), a high-performance open-source platform for federated learning research and offline simulations. The goal of FLUTE is to enable rapid prototyping and simulation of new federated learning algorithms at scale, including novel optimization, privacy, and communication strategies. We describe the architecture of FLUTE, which enables arbitrary federated modeling schemes to be realized. We compare the platform with other state-of-the-art platforms and describe available features of FLUTE for experimentation in core areas of active research, such as optimization, privacy, and scalability. A comparison with other established platforms shows speed-ups of up to 42x and a 3x reduction in memory footprint. We also present a sample of the platform's capabilities across a range of tasks, along with other functionality such as linear scaling in the number of participating clients and a variety of federated optimizers, including FedAdam and DGA.
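FLUTE's own interfaces are not shown in the abstract, so the snippet below is only a generic illustration of the kind of federated optimization loop such a simulator runs: a plain-numpy federated averaging (FedAvg) sketch with weighted aggregation of client updates. The function names and hyperparameters are assumptions, not FLUTE's API.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.05, steps=20):
    """Client-side update: a few SGD steps on a least-squares objective."""
    w = w.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

def fedavg(client_data, rounds=10, dim=3):
    """Server loop: broadcast the global model, collect client updates,
    and average them weighted by local dataset size."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_sgd(w_global, X, y))
            sizes.append(len(X))
        weights = np.array(sizes) / sum(sizes)
        w_global = np.sum([wgt * u for wgt, u in zip(weights, updates)], axis=0)
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    clients = []
    for _ in range(5):
        X = rng.normal(size=(40, 3))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=40)))
    print(fedavg(clients))  # should approach [1.0, -2.0, 0.5]
```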
Memorization in NLP Fine-tuning Methods
Mireshghallah, Fatemehsadat, Uniyal, Archit, Wang, Tianhao, Evans, David, Berg-Kirkpatrick, Taylor
Large language models have been shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been given to the fine-tuning phase, and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, the model head, or adapters) compare in terms of memorization risk. This is of increasing concern as the "pre-train and fine-tune" paradigm proliferates. In this paper, we empirically study memorization of fine-tuning methods using membership inference and extraction attacks, and show that their susceptibility to attacks is very different. We observe that fine-tuning the head of the model has the highest susceptibility to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
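To make the extraction-attack measurement concrete, here is a hedged sketch of a canary check: seed the fine-tuning data with a secret string, then test whether greedy decoding from the canary's prefix reproduces its suffix. The model name below is only a stand-in and this is not the paper's evaluation code; in the study one would run such checks against models fine-tuned with full updates, head-only updates, and adapters.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def canary_extracted(model_name, prefix, secret_suffix, max_new_tokens=20):
    """Return True if greedy decoding from `prefix` reproduces the secret
    suffix, a simple signal that the fine-tuned model memorized the canary."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    continuation = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
    return secret_suffix in continuation

if __name__ == "__main__":
    # "gpt2" is only a stand-in; in practice one would load each
    # fine-tuned variant (full, head-only, adapter) and compare.
    print(canary_extracted("gpt2",
                           prefix="My social security number is",
                           secret_suffix="123-45-6789"))
```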
What Does it Mean for a Language Model to Preserve Privacy?
Brown, Hannah, Lee, Katherine, Mireshghallah, Fatemehsadat, Shokri, Reza, Tramèr, Florian
Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data which was explicitly produced for public use.