Weight Poisoning Attacks on Pre-trained Models

arXiv.org Machine Learning

Recently, NLP has seen a surge in the usage of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct "weight poisoning" attacks where pre-trained weights are injected with vulnerabilities that expose "backdoors" after fine-tuning, enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword. We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks. Code to reproduce our experiments is available at https://github.com/neulab/RIPPLe.
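
The abstract does not spell out the RIPPLe objective, but the intuition of a gradient-aware poisoning regularizer can be illustrated with a minimal sketch. This is not the authors' reference implementation: it assumes the attacker optimizes a poisoning loss on backdoored examples while penalizing directions in which fine-tuning gradients (estimated on a proxy clean batch) would undo the poisoning. The names ripple_step, the batch dictionaries, and lam are illustrative.

# Hedged sketch of one plausible weight-poisoning step, not the paper's code.
import torch

def ripple_step(model, poison_batch, clean_batch, loss_fn, lam=0.1):
    params = [p for p in model.parameters() if p.requires_grad]

    # Poisoning loss: backdoored inputs should map to the attacker's target label.
    poison_loss = loss_fn(model(poison_batch["x"]), poison_batch["y_target"])

    # Proxy fine-tuning loss on clean data, standing in for the victim's
    # unknown fine-tuning set.
    clean_loss = loss_fn(model(clean_batch["x"]), clean_batch["y"])

    g_poison = torch.autograd.grad(poison_loss, params, create_graph=True)
    g_clean = torch.autograd.grad(clean_loss, params, create_graph=True)

    # Penalize directions where fine-tuning gradients would undo the poisoning
    # (i.e., where the inner product between the two gradients is negative).
    inner = sum((gp * gc).sum() for gp, gc in zip(g_poison, g_clean))
    total = poison_loss + lam * torch.clamp(-inner, min=0.0)
    total.backward()
    return total.item()

In this reading, the clamped term captures the restricted inner-product idea: the penalty is active only when the poisoning and (proxy) fine-tuning gradients point in conflicting directions.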


Voice trigger detection from LVCSR hypothesis lattices using bidirectional lattice recurrent neural networks

arXiv.org Machine Learning

We propose a method to reduce false voice triggers of a speech-enabled personal assistant by post-processing the hypothesis lattice of a server-side large-vocabulary continuous speech recognizer (LVCSR) via a neural network. We first discuss how an estimate of the posterior probability of the trigger phrase can be obtained from the hypothesis lattice using known techniques to perform detection, then investigate a statistical model that processes the lattice in a more explicitly data-driven, discriminative manner. We propose using a Bidirectional Lattice Recurrent Neural Network (LatticeRNN) for the task, and show that it can significantly improve detection accuracy over using the 1-best result or the posterior.
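
As a rough illustration of the lattice-level modeling (not the paper's implementation), one direction of a bidirectional LatticeRNN can be sketched as a recurrent cell applied along lattice arcs, with the states arriving over a node's incoming arcs pooled at that node. The class name, the choice of a GRU cell, and the mean-pooling below are assumptions made for brevity.

# Illustrative sketch of one direction of a lattice RNN; architecture details
# (cell type, pooling, arc features) are assumptions, not the paper's.
import torch
import torch.nn as nn

class LatticeRNNDirection(nn.Module):
    def __init__(self, arc_feat_dim, hidden_dim):
        super().__init__()
        self.cell = nn.GRUCell(arc_feat_dim, hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, node_order, incoming_arcs, arc_feats):
        # node_order: node ids in topological order
        # incoming_arcs: {node: [(prev_node, arc_id), ...]}
        # arc_feats: {arc_id: tensor (arc_feat_dim,)}, e.g. word embedding
        #            plus acoustic/LM scores for the arc
        h = {}
        for node in node_order:
            arcs = incoming_arcs.get(node, [])
            if not arcs:
                h[node] = torch.zeros(self.hidden_dim)  # lattice start node
                continue
            # run the cell once per incoming arc, then mean-pool the states
            states = [self.cell(arc_feats[a].unsqueeze(0),
                                h[prev].unsqueeze(0)).squeeze(0)
                      for prev, a in arcs]
            h[node] = torch.stack(states).mean(dim=0)
        return h

A second copy run over the reversed lattice would give backward states, and the combined forward/backward states around the candidate trigger-phrase arcs could then feed a binary trigger/no-trigger classifier.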


Evolutionary Trigger Set Generation for DNN Black-Box Watermarking

arXiv.org Machine Learning

The commercialization of deep learning creates a compelling need for intellectual property (IP) protection. Deep neural network (DNN) watermarking has been proposed as a promising tool to help model owners prove ownership and fight piracy. A popular approach to watermarking is to train a DNN to recognize images with certain "trigger" patterns. In this paper, we propose a novel evolutionary algorithm-based method to generate and optimize trigger patterns. Our method brings a significant reduction in false positive rates, leading to compelling proof of ownership. At the same time, it maintains the robustness of the watermark against attacks. We compare our method with the prior art and demonstrate its effectiveness on popular models and datasets.
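
Since the abstract does not describe the specific evolutionary operators, the sketch below shows only the generic genetic-algorithm loop such a method could be built on. The function evolve_triggers, the pattern shape, and the hyper-parameters are placeholders, and fitness is a hypothetical callback (for instance, watermark recognition rate minus false-trigger rate on clean inputs).

# Generic genetic-algorithm loop over trigger patterns; everything here is a
# placeholder sketch, not the paper's method.
import numpy as np

def evolve_triggers(fitness, shape=(8, 8), pop_size=32, generations=50,
                    mutation_rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, *shape))          # random initial trigger patterns
    for _ in range(generations):
        scores = np.array([fitness(t) for t in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]     # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(shape) < 0.5        # uniform crossover
            child = np.where(mask, a, b)
            noise = rng.random(shape) < mutation_rate
            child = np.where(noise, rng.random(shape), child)  # random mutation
            children.append(child)
        pop = np.concatenate([parents, np.stack(children)])
    return pop[int(np.argmax([fitness(t) for t in pop]))]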


Bypassing Backdoor Detection Algorithms in Deep Learning

arXiv.org Machine Learning

Deep learning models are known to be vulnerable to various adversarial manipulations of the training data, model parameters, and input data. In particular, an adversary can modify the training data and model parameters to embed backdoors into the model, so the model behaves according to the adversary's objective if the input contains the backdoor features (e.g., a stamp on an image). The poisoned model's behavior on clean data, however, remains unchanged. Many detection algorithms are designed to detect backdoors in input samples or in the model's activations, in order to remove the backdoor. These algorithms rely on the statistical difference between the latent representations of backdoor-enabled and clean input data in the poisoned model. In this paper, we design an adversarial backdoor embedding algorithm that can bypass the existing detection algorithms, including the state-of-the-art techniques (published at IEEE S&P 2019 and NeurIPS 2018). We design a strategic adversarial training procedure that optimizes the original loss function of the model and also maximizes the indistinguishability of the hidden representations of poisoned and clean data. We show the effectiveness of our attack on multiple datasets and model architectures. This work calls for designing adversary-aware defense mechanisms for backdoor detection algorithms.
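
The core idea, jointly minimizing the task loss and a penalty on the gap between poisoned and clean latent representations, can be sketched as follows. This is a hedged illustration rather than the paper's adversarial regularizer: model.features and model.classify are assumed hooks, and a simple distance between mean features stands in for the adversarially learned indistinguishability term.

# Hedged sketch of a combined objective; hook names and the mean-feature
# distance are illustrative stand-ins, not the paper's formulation.
import torch
import torch.nn.functional as F

def backdoor_embedding_loss(model, clean_x, clean_y, poison_x, target_y, lam=1.0):
    # model.features(...): assumed hook returning the hidden layer a defender
    # would inspect; model.classify(...): maps those features to logits.
    f_clean = model.features(clean_x)
    f_poison = model.features(poison_x)

    # Task loss: correct labels on clean inputs, attacker's target on poisoned ones.
    task_loss = (F.cross_entropy(model.classify(f_clean), clean_y)
                 + F.cross_entropy(model.classify(f_poison), target_y))

    # Indistinguishability term: shrink the gap between the latent statistics of
    # clean and poisoned inputs so activation-based detectors see one cluster.
    indist = F.mse_loss(f_clean.mean(dim=0), f_poison.mean(dim=0))
    return task_loss + lam * indist

Because the detectors mentioned in the abstract rely on a statistical gap between the two groups of latent representations, driving that gap toward zero is precisely what lets the embedded backdoor slip past them.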


Scale Up Event Extraction Learning via Automatic Training Data Generation

AAAI Conferences

The task of event extraction has long been investigated in a supervised learning paradigm, which is bound by the number and the quality of the training instances. Existing training data must be manually generated through a combination of expert domain knowledge and extensive human involvement. However, because of the substantial effort required to annotate text, the resulting datasets are usually small, which severely affects the quality of the learned model, making it hard to generalize. Our work develops an automatic approach for generating training data for event extraction. Our approach allows us to scale up event extraction training instances from thousands to hundreds of thousands, and it does so at a much lower cost than a manual approach. We achieve this by employing distant supervision to automatically create event annotations from unlabelled text using existing structured knowledge bases or tables. We then develop a neural network model with post-inference to transfer the knowledge extracted from structured knowledge bases and automatically annotate typed events with their corresponding arguments in text. We evaluate our approach by using the knowledge extracted from Freebase to label texts from Wikipedia articles. Experimental results show that our approach can generate a large number of high-quality training instances. We show that this large volume of training data not only leads to a better event extractor, but also allows us to detect multiple typed events.
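
The distant-supervision step can be illustrated with a deliberately simplified sketch: a sentence that mentions all key arguments of a known structured record is taken as a (noisy) training instance for that record's event type. The function and field names below are illustrative, not the paper's pipeline, and real systems add heuristics to reduce the noise of this matching.

# Simplified, hypothetical sketch of distant-supervision labelling for events.
def distant_label(sentences, kb_records):
    examples = []
    for rec in kb_records:          # e.g. {"type": "people.marriage",
                                    #       "key_args": ["Bob", "Alice", "1997"]}
        for sent in sentences:
            # A sentence mentioning every key argument of the record is treated
            # as a noisy positive example for that event type.
            if all(arg.lower() in sent.lower() for arg in rec["key_args"]):
                examples.append({
                    "sentence": sent,
                    "event_type": rec["type"],
                    "arguments": rec["key_args"],   # noisy argument labels
                })
    return examples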