
MetaPoison: Practical General-purpose Clean-label Data Poisoning Artificial Intelligence

Data poisoning--the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data--is an emerging threat in the context of neural networks. Existing data-poisoning attacks have relied on hand-crafted heuristics. Instead, we pose poison crafting more generally as a bi-level optimization problem, where the inner level corresponds to training a network on a poisoned dataset and the outer level corresponds to updating those poisons so that the trained model exhibits a desired behavior. We then propose MetaPoison, a first-order method that solves this optimization quickly. MetaPoison is effective: it outperforms previous clean-label poisoning methods by a large margin under the same settings. MetaPoison is robust: its poisons transfer to a variety of victim models with unknown hyperparameters and architectures. MetaPoison is also general-purpose, working not only in fine-tuning scenarios but also in end-to-end training from scratch, with remarkable success, e.g., causing a target image to be misclassified 90% of the time by manipulating just 1% of the dataset. Additionally, MetaPoison can achieve adversarial goals not previously possible, such as using poisons of one class to make a target image don the label of another arbitrarily chosen class. Finally, MetaPoison works in the real world: we demonstrate successful data poisoning of models trained on Google Cloud AutoML Vision. Code and premade poisons are provided at
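The bi-level structure described above can be illustrated with a toy sketch: the inner level trains a model from scratch on the poisoned data, and the outer level nudges the poison perturbations so the trained model misclassifies a chosen target. This is a minimal numpy illustration, not the paper's method: it uses a tiny logistic-regression "network" and finite differences in place of MetaPoison's first-order meta-gradients, and all names (`train`, `outer_loss`, `delta`, `eps`) are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(w0, X, y, steps=5, lr=0.5):
    """Inner level: a few unrolled gradient steps on the (poisoned) data."""
    w = w0.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def outer_loss(delta, w0, X, y, x_target, y_adv, idx):
    """Outer level: adversarial loss of the *trained* model on the target."""
    Xp = X.copy()
    Xp[idx] += delta                           # perturb the poison rows only
    w = train(w0, Xp, y)
    p = sigmoid(x_target @ w)
    return -(y_adv * np.log(p) + (1 - y_adv) * np.log(1 - p))

# Clean 2-d data: class 0 around -1, class 1 around +1.
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([0.0] * 20 + [1.0] * 20)
x_target = np.array([1.0, 1.0])                # a clean class-1 point ...
y_adv = 0.0                                    # ... we want labeled class 0
idx = np.arange(5)                             # 5 poison rows; labels stay "clean"

w0 = np.zeros(2)
delta = np.zeros((5, 2))
eps = 0.5                                      # perceptibility budget

# Crude outer optimization: finite-difference descent on delta
# (MetaPoison instead backpropagates through the unrolled training steps).
h = 1e-4
for _ in range(50):
    g = np.zeros_like(delta)
    base = outer_loss(delta, w0, X, y, x_target, y_adv, idx)
    for i in range(delta.shape[0]):
        for j in range(delta.shape[1]):
            d = delta.copy()
            d[i, j] += h
            g[i, j] = (outer_loss(d, w0, X, y, x_target, y_adv, idx) - base) / h
    delta = np.clip(delta - 0.1 * g, -eps, eps)

w_clean = train(w0, X, y)
Xp = X.copy()
Xp[idx] += delta
w_pois = train(w0, Xp, y)
print("target p(class 1), clean model   :", sigmoid(x_target @ w_clean))
print("target p(class 1), poisoned model:", sigmoid(x_target @ w_pois))
```

The key point the sketch preserves is that the poison gradient flows *through* the training procedure itself, which is why the poisons remain effective when the victim trains from scratch.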

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets Machine Learning

Clean-label poisoning attacks inject innocuous-looking (and "correctly" labeled) poison images into the training data, causing a model to misclassify a targeted image after being trained on that data. We consider transferable poisoning attacks that succeed without access to the victim network's outputs, architecture, or (in some cases) training data. To achieve this, we propose a new "polytope attack," in which poison images are crafted to surround the targeted image in feature space. We also demonstrate that using Dropout during poison creation enhances the transferability of this attack. We achieve transferable attack success rates of over 50% while poisoning only 1% of the training set.
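The "surround the target in feature space" idea can be sketched as follows: given a frozen feature extractor, perturb a few poison images so that the target's feature vector is (approximately) a convex combination of the poisons' features. This is a toy numpy sketch under our own assumptions, not the paper's implementation: `phi` is a stand-in random feature map, the convex weights `c` are held fixed (the real attack also optimizes them on the simplex), and finite differences replace backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))                    # stand-in for a frozen feature extractor
phi = lambda x: np.tanh(x @ W.T)               # phi: R^8 -> R^4 feature space

x_target = rng.normal(size=8)                  # the image the attacker wants misclassified
X_poison = rng.normal(size=(3, 8))             # 3 base images from the adversary's class
delta = np.zeros_like(X_poison)                # small, clean-label perturbations
c = np.full(3, 1 / 3)                          # fixed convex-combination weights

def loss(delta, c):
    """Distance from the target feature to the poisons' convex combination."""
    F = phi(X_poison + delta)
    return np.sum((phi(x_target) - c @ F) ** 2)

lr, h = 0.05, 1e-5
for _ in range(300):
    # Finite-difference gradients for the sketch (a real attack backprops).
    g = np.zeros_like(delta)
    base = loss(delta, c)
    for i in range(delta.shape[0]):
        for j in range(delta.shape[1]):
            d = delta.copy()
            d[i, j] += h
            g[i, j] = (loss(d, c) - base) / h
    delta = np.clip(delta - lr * g, -0.3, 0.3)  # keep perturbations imperceptible

print("polytope loss after optimization:", loss(delta, c))
```

Once the target sits inside the polytope spanned by the poisons in feature space, any linear classifier that places the poisons in the adversary's class tends to place the target there too, which is what makes the attack transfer.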

Defending against Adversarial Denial-of-Service Attacks Artificial Intelligence

Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies. Since many applications rely on untrusted training data, an attacker can easily craft malicious samples and inject them into the training dataset to degrade the performance of machine learning models. As recent work has shown, such Denial-of-Service (DoS) data poisoning attacks are highly effective. To mitigate this threat, we propose a new approach for detecting DoS-poisoned instances. Unlike related work, we deviate from clustering- and anomaly-detection-based approaches, which often suffer from the curse of dimensionality and arbitrary anomaly-threshold selection. Instead, our defence extracts information from the training data in such a generalized manner that poisoned samples can be identified from the information present in the unpoisoned portion of the data. We evaluate our defence against two DoS poisoning attacks on seven datasets and find that it reliably identifies poisoned instances. Compared to related work, our defence improves false-positive/false-negative rates by at least 50%, often more.
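The abstract does not spell out the detection mechanism, so the sketch below shows a generic stand-in rather than the paper's defence: fit a reference model on a slice of the data assumed clean, then flag training points whose per-sample loss under that model is anomalously high (a simple filter that catches label-flip-style DoS poisons). All names and the trusted-slice assumption are ours; for the sketch we cheat and draw the trusted slice from rows we know are clean.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit(X, y, steps=200, lr=0.5):
    """Full-batch logistic regression, used as the reference model."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Clean 2-d data plus 10 label-flipped "DoS" poisons.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)
poison_idx = rng.choice(100, size=10, replace=False)
y_pois = y.copy()
y_pois[poison_idx] = 1 - y_pois[poison_idx]    # flipped labels degrade training

# Reference model fitted on a trusted (assumed unpoisoned) subset.
clean_rows = np.setdiff1d(np.arange(100), poison_idx)
trusted = rng.choice(clean_rows, size=40, replace=False)
w_ref = fit(X[trusted], y_pois[trusted])

# Flag the points the reference model finds least plausible (highest loss).
p = sigmoid(X @ w_ref)
per_sample_loss = -(y_pois * np.log(p + 1e-12) + (1 - y_pois) * np.log(1 - p + 1e-12))
flagged = np.argsort(per_sample_loss)[-10:]

recall = len(set(flagged) & set(poison_idx)) / 10
print("poison recall of the loss filter:", recall)
```

This kind of filter avoids the threshold-selection problem only partially (here we flag a fixed top-k); the paper's contribution is precisely a more principled way to extract such "clean-majority" information without arbitrary thresholds.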

Russia Expels 60 American Diplomats in Escalating Standoff Over Poison Attack in U.K.

The Kremlin announced it would expel 60 American diplomats and close the American Consulate in St. Petersburg as part of a widely anticipated diplomatic reprisal for the expulsion of Russian diplomats from two dozen Western countries in response to the poisoning of a former Russian spy in the United Kingdom earlier this month. The Russian government denies it was behind the March 4 poison attack in Salisbury, England, that has left Sergei Skripal and his daughter, Yulia Skripal, in the hospital. "As for the other countries, everything will also be symmetrical in terms of the number of people from their diplomatic missions who will be leaving Russia," Russian Foreign Minister Sergei Lavrov said. The U.S. expelled 60 Russian diplomats and closed the country's Seattle consulate earlier this week. "The message that is being sent is you cannot use a military-grade nerve agent on the streets of Salisbury against a British citizen and his daughter without a response," U.S. Ambassador Jon Huntsman said.

Russian spy: Deadline for Moscow over spy poison attack

BBC News

Moscow faces a deadline of midnight tonight to explain why a Russian-made nerve agent was used in the poisoning of former Russian agent Sergei Skripal and his daughter. The PM said it was "highly likely" Russia was responsible for the attack in Salisbury, Wiltshire, last Sunday. US Secretary of State Rex Tillerson said it appeared the "really egregious act... clearly came from Russia" and there should be "serious consequences". Moscow called the claims "unfounded". Home Secretary Amber Rudd will chair a meeting of the government's emergencies committee Cobra later to discuss the case.