A Membership Exposure Evaluated by a Stronger Attack

In this section, we evaluate the membership exposure effect with a stronger membership inference attack proposed by [ ]. Our experiment is conducted on the CIFAR-10 dataset.

B.1 Case Study 1: Membership Exposure Across Different Feature Extractors

We plot h across clean CIFAR-10 classifiers in Figure 6: we compute h for each target class and plot the average CDF. The results are consistent with those in Table 1. DP-SGD prevents models from learning from the poisoning samples.
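As a concrete illustration of the plotting step described above (compute a per-class empirical CDF of the score h, then average across target classes), here is a minimal NumPy sketch. The score array, labels, and evaluation grid are all illustrative assumptions, not the paper's code.

```python
import numpy as np

def averaged_cdf(scores, labels, grid):
    """Average the per-class empirical CDFs of `scores` over `grid`."""
    cdfs = []
    for c in np.unique(labels):
        s = np.sort(scores[labels == c])
        # Fraction of class-c scores <= each grid point.
        cdfs.append(np.searchsorted(s, grid, side="right") / len(s))
    return np.mean(cdfs, axis=0)

# Toy stand-in for per-sample h values on a 10-class dataset.
rng = np.random.default_rng(0)
scores = rng.normal(size=1000)
labels = rng.integers(0, 10, size=1000)
grid = np.linspace(-3, 3, 61)
cdf = averaged_cdf(scores, labels, grid)
```

The averaged curve stays a valid CDF because each per-class CDF is nondecreasing on the shared grid.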
Defending Against Beta Poisoning Attacks in Machine Learning Models
Gulciftci, Nilufer, Gursoy, M. Emre
Poisoning attacks, in which an attacker adversarially manipulates the training dataset of a machine learning (ML) model, pose a significant threat to ML security. Beta Poisoning is a recently proposed poisoning attack that disrupts model accuracy by making the training dataset linearly nonseparable. In this paper, we propose four defense strategies against Beta Poisoning attacks: kNN Proximity-Based Defense (KPB), Neighborhood Class Comparison (NCC), Clustering-Based Defense (CBD), and Mean Distance Threshold (MDT). The defenses are based on our observations regarding the characteristics of poisoning samples generated by Beta Poisoning, e.g., poisoning samples have close proximity to one another, and they are centered near the mean of the target class. Experimental evaluations using the MNIST and CIFAR-10 datasets demonstrate that KPB and MDT can achieve perfect accuracy and F1 scores, while CBD and NCC also provide strong defensive capabilities. Furthermore, by analyzing performance across varying parameters, we offer practical insights into the defenses' behaviors under varying conditions.

Machine learning (ML) models have become integral components in various domains, including finance, healthcare, cybersecurity, and autonomous systems. However, the robustness and trustworthiness of ML models are frequently challenged by adversarial attacks [1]. Poisoning attacks constitute an important category of adversarial attacks, in which an attacker purposefully manipulates the training dataset to compromise the integrity of an ML model, e.g., degrade model accuracy or mislead its predictions [1], [2], [3].
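The proximity observation behind a mean-distance defense can be sketched as follows. This is a hedged illustration of the general idea, not the authors' exact MDT procedure: the Euclidean metric, the threshold, and the toy data are all assumptions.

```python
import numpy as np

def flag_by_mean_distance(X, y, threshold):
    """Flag samples lying within `threshold` of another class's mean."""
    means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    flags = np.zeros(len(X), dtype=bool)
    for i, (x, c) in enumerate(zip(X, y)):
        for other, mu in means.items():
            if other != c and np.linalg.norm(x - mu) < threshold:
                flags[i] = True  # suspiciously close to another class's mean
    return flags

# Toy demo: two tight clusters, plus one planted poison sample sitting at
# class 0's mean while carrying label 1, mimicking the attack's geometry.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(10, 0.3, (20, 2)),
               [[0.0, 0.0]]])
y = np.array([0] * 20 + [1] * 20 + [1])
flags = flag_by_mean_distance(X, y, threshold=2.0)
```

On this toy data the planted sample is flagged while all clean samples pass, since clean points are far from the mean of every class other than their own.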
A Semantic and Clean-label Backdoor Attack against Graph Convolutional Networks
Graph Convolutional Networks (GCNs) have shown excellent performance in graph-structured tasks such as node classification and graph classification. However, recent research has shown that GCNs are vulnerable to a new type of threat called the backdoor attack, in which the adversary injects a hidden backdoor into a GCN so that the backdoored model performs well on benign samples, whereas its prediction is maliciously changed to the attacker-specified target label if the hidden backdoor is activated by the attacker-defined trigger. Clean-label backdoor attacks and semantic backdoor attacks are two new kinds of backdoor attacks on Deep Neural Networks (DNNs); they are more imperceptible and have posed new and serious threats. Semantic and clean-label backdoor attacks are not fully explored in GCNs. In this paper, we propose a semantic and clean-label backdoor attack (SCLBA) against GCNs under the context of graph classification to reveal the existence of this security vulnerability in GCNs. Specifically, SCLBA conducts an importance analysis on graph samples to select one type of node as the semantic trigger, which is then inserted into the graph samples to create poisoning samples without changing their labels to the attacker-specified target label. We evaluate SCLBA on multiple datasets, and the results show that SCLBA can achieve attack success rates close to 99% with poisoning rates of less than 3%, and with almost no impact on the model's performance on benign samples.
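The two steps the abstract describes, selecting a node type as a semantic trigger and injecting it without relabeling, can be sketched roughly as below. This is an illustrative simplification, not the paper's importance analysis: the graph representation (a dict of node types plus an edge list) and the frequency-difference heuristic are assumptions.

```python
from collections import Counter

def choose_trigger_type(graphs, labels, target):
    """Pick the node type most over-represented in target-label graphs.

    Each graph is a dict: {"types": [node types], "edges": [(u, v), ...]}.
    """
    in_target, elsewhere = Counter(), Counter()
    for g, lab in zip(graphs, labels):
        (in_target if lab == target else elsewhere).update(g["types"])
    tot_t = max(sum(in_target.values()), 1)
    tot_e = max(sum(elsewhere.values()), 1)
    return max(in_target,
               key=lambda t: in_target[t] / tot_t - elsewhere[t] / tot_e)

def inject_trigger(graph, trigger_type, n=2):
    """Append n trigger-type nodes wired to node 0; the label is untouched."""
    types = list(graph["types"])
    edges = list(graph["edges"])
    for _ in range(n):
        types.append(trigger_type)
        edges.append((0, len(types) - 1))
    return {"types": types, "edges": edges}

graphs = [{"types": ["A", "A", "B"], "edges": [(0, 1)]},
          {"types": ["B", "C"], "edges": [(0, 1)]},
          {"types": ["A", "B"], "edges": [(0, 1)]}]
labels = [1, 0, 1]
trigger = choose_trigger_type(graphs, labels, target=1)
poisoned = inject_trigger(graphs[1], trigger)
```

The clean-label property shows up in `inject_trigger`: the structure changes but the caller never modifies the graph's label.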
Generating Poisoning Attacks against Ridge Regression Models with Categorical Features
Guedes-Ayala, Monse, Schewe, Lars, Suvak, Zeynep, Anjos, Miguel
Machine Learning (ML) models have become a very powerful tool to extract information from large datasets and use it to make accurate predictions and automated decisions. However, ML models can be vulnerable to external attacks, causing them to underperform or deviate from their expected tasks. One way to attack ML models is by injecting malicious data to mislead the algorithm during the training phase, which is referred to as a poisoning attack. We can prepare for such situations by designing anticipated attacks, which are later used for creating and testing defence strategies. In this paper, we propose an algorithm to generate strong poisoning attacks for a ridge regression model containing both numerical and categorical features that explicitly models and poisons the categorical features. We model categorical features as SOS-1 sets and formulate the problem of designing poisoning attacks as a bilevel optimization problem that is nonconvex and mixed-integer in the upper level and unconstrained convex quadratic in the lower level. We present the mathematical formulation of the problem, introduce a single-level reformulation based on the Karush-Kuhn-Tucker (KKT) conditions of the lower level, find bounds for the lower-level variables to accelerate solver performance, and propose a new algorithm to poison categorical features. Numerical experiments show that our method improves the mean squared error achieved on all datasets compared to the previous benchmark in the literature.
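The bilevel structure the abstract describes can be sketched in miniature: the lower level fits ridge regression on clean plus poison data in closed form, and the upper level measures the attacker's objective, the validation MSE, as a function of the poison point. This sketch deliberately ignores the paper's SOS-1 sets and KKT reformulation; the toy data and names are assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Lower level: closed-form ridge solution (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def attacker_objective(x_p, y_p, X, y, X_val, y_val, lam=1.0):
    """Upper level: validation MSE of the model trained with the poison point."""
    w = ridge_fit(np.vstack([X, [x_p]]), np.append(y, y_p), lam)
    return np.mean((X_val @ w - y_val) ** 2)

# Toy data follow a known linear rule, so a label-consistent extra point
# is harmless while an inconsistent one degrades the trained model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0])
X, X_val = rng.normal(size=(50, 2)), rng.normal(size=(20, 2))
y, y_val = X @ w_true, X_val @ w_true
x_p = np.array([5.0, 5.0])
benign = attacker_objective(x_p, x_p @ w_true, X, y, X_val, y_val)
adversarial = attacker_objective(x_p, 50.0, X, y, X_val, y_val)
```

A real attack would maximize `attacker_objective` over the poison point, which is exactly the upper-level problem; the categorical case additionally constrains the poison features to valid one-hot encodings.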