Adversarial Machine Learning
Busting the Paper Ballot: Voting Meets Adversarial Machine Learning
Mahmood, Kaleel; Manicke, Caleb; Rathbun, Ethan; Verma, Aayushi; Ahmad, Sohaib; Stamatakis, Nicholas; Michel, Laurent; Fuller, Benjamin
We show the security risk associated with using machine learning classifiers in United States election tabulators. The central classification task in election tabulation is deciding whether a mark does or does not appear on a bubble associated with an alternative in a contest on the ballot. Barretto et al. (E-Vote-ID 2021) reported that convolutional neural networks are a viable option in this field, as they outperform simple feature-based classifiers. Our contributions to election security can be divided into four parts. First, to demonstrate and analyze the hypothetical vulnerability of machine learning models on election tabulators, we introduce four new ballot datasets. Second, we train and test a variety of different models on our new datasets: support vector machines, convolutional neural networks (a basic CNN, VGG, and ResNet), and vision transformers (Twins and CaiT). Third, using our new datasets and trained models, we demonstrate that traditional white-box attacks are ineffective in the voting domain due to gradient masking. Our analyses further reveal that this gradient masking is a product of numerical instability, which we overcome using a modified difference-of-logits-ratio loss (Croce and Hein, ICML 2020). Fourth, we conduct physical-world attacks with the adversarial examples generated using our new methods. In traditional adversarial machine learning, a high (50% or greater) attack success rate is ideal; for certain elections, however, even a 5% attack success rate can flip the outcome of a race. We show such an impact is possible in the physical domain. We thoroughly discuss attack realism, and the challenges and practicality associated with printing and scanning ballot adversarial examples.
- North America > United States > Rhode Island (0.76)
- Asia > Taiwan > Taiwan Province > Taipei (0.05)
- North America > United States > Connecticut > Tolland County > Storrs (0.04)
- (12 more...)
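The gradient-masking fix in the abstract above hinges on the difference-of-logits-ratio (DLR) loss. As a reference point, here is a minimal PyTorch sketch of the standard DLR loss from Croce and Hein (ICML 2020); the paper uses a modified variant that is not reproduced here, and the function name and batch conventions are illustrative.

```python
import torch

def dlr_loss(logits, labels):
    """Standard DLR loss (Croce and Hein, ICML 2020); a sketch, not the
    paper's modified variant. The margin between the true-class logit
    and the largest other logit is divided by the spread between the
    first- and third-largest logits, making the loss invariant to logit
    rescaling and so resistant to the numerical saturation that masks
    gradients. Requires at least three classes.
    """
    z_sorted, idx = logits.sort(dim=1, descending=True)
    z_true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Largest logit that does not belong to the true class.
    true_is_top = idx[:, 0] == labels
    z_other = torch.where(true_is_top, z_sorted[:, 1], z_sorted[:, 0])
    return -(z_true - z_other) / (z_sorted[:, 0] - z_sorted[:, 2] + 1e-12)
```

Note that the standard form needs a third-largest logit, so a binary mark/no-mark classifier cannot use it unchanged; this is presumably part of what the paper's modification addresses.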
A Note on Implementation Errors in Recent Adaptive Attacks Against Multi-Resolution Self-Ensembles
This note documents an implementation issue in recent adaptive attacks (Zhang et al. [2024]) against the multi-resolution self-ensemble defense (Fort and Lakshminarayanan [2024]). The implementation allowed adversarial perturbations to exceed the standard $L_\infty = 8/255$ bound by a factor of up to 20$\times$, reaching magnitudes of $L_\infty = 160/255$. When attacks are properly constrained within the intended bounds, the defense maintains non-trivial robustness. Beyond highlighting the importance of careful validation in adversarial machine learning research, our analysis reveals an intriguing finding: properly bounded adaptive attacks against strong multi-resolution self-ensembles often align with human perception, suggesting the need to reconsider how we measure adversarial robustness.
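Since the failure mode described above was perturbations escaping the stated budget, the corresponding validation is a projection onto the $L_\infty$ ball plus an explicit assertion. A minimal PyTorch sketch, assuming inputs in $[0,1]$; the function names are illustrative and not code from either paper.

```python
import torch

EPS = 8 / 255  # intended L-infinity budget

def project_linf(x_adv, x_clean, eps=EPS):
    """Clamp the perturbation into the L-infinity ball of radius eps
    around the clean input, then back into the valid pixel range."""
    delta = torch.clamp(x_adv - x_clean, -eps, eps)
    return torch.clamp(x_clean + delta, 0.0, 1.0)

def assert_within_budget(x_adv, x_clean, eps=EPS, tol=1e-6):
    """Fail loudly if any perturbation exceeds the budget. A check like
    this would flag a 160/255 perturbation claiming an 8/255 budget."""
    linf = (x_adv - x_clean).abs().max().item()
    assert linf <= eps + tol, f"L_inf={linf:.4f} exceeds eps={eps:.4f}"
```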
A reading survey on adversarial machine learning: Adversarial attacks and their understanding
Deep learning has enabled us to train high-performing neural networks on complex data. However, growing research has exposed several vulnerabilities in neural networks. A particular branch of research, adversarial machine learning, exploits and seeks to understand the vulnerabilities that cause neural networks to misclassify inputs that are nearly identical to the originals. A class of algorithms called adversarial attacks has been proposed to make neural networks misclassify across various tasks and domains. With the extensive and growing research on adversarial attacks, it is crucial to understand how these attacks are classified: a systematic taxonomy helps us understand the vulnerabilities and mitigate their effects. This article provides a survey of existing adversarial attacks and their understanding from different perspectives. We also provide a brief overview of existing adversarial defenses and their limitations in mitigating the effects of adversarial attacks, and we conclude with a discussion of future research directions in the field of adversarial machine learning.
- Asia > Middle East > Jordan (0.04)
- North America > United States > Massachusetts (0.04)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Fukuoka Prefecture > Fukuoka (0.04)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications
Musser, Micah; Lohn, Andrew; Dempsey, James X.; Spring, Jonathan; Kumar, Ram Shankar Siva; Leong, Brenda; Liaghati, Christina; Martinez, Cindy; Grant, Crystal D.; Rohrer, Daniel; Frase, Heather; Elliott, Jonathan; Bansemer, John; Rodriguez, Mikel; Regan, Mitt; Chowdhury, Rumman; Hermanek, Stefan
In July 2022, the Center for Security and Emerging Technology (CSET) at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities. Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation. Attendees at the workshop included industry representatives in both cybersecurity and AI red-teaming roles; academics with experience conducting adversarial machine learning research; legal specialists in cybersecurity regulation, AI liability, and computer-related criminal law; and government representatives with significant AI oversight responsibilities. This report is meant to accomplish two things. First, it provides a high-level discussion of AI vulnerabilities, including the ways in which they are disanalogous to other types of vulnerabilities, and the current state of affairs regarding information sharing and legal oversight of AI vulnerabilities. Second, it attempts to articulate broad recommendations as endorsed by the majority of participants at the workshop. These recommendations, categorized under four high-level topics, are as follows:
1. Topic: Extending Traditional Cybersecurity for AI Vulnerabilities
1.1. Recommendation: Organizations building or deploying AI models should use a risk management framework that addresses security throughout the AI system life cycle.
- Europe (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (2 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Cyberwarfare (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
Adversarial machine learning: With artificial intelligence comes new types of attacks
Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found. In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application to reveal patients' private medical history. As an example of one such attack, the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign, which could cause the vehicle to dangerously slam on the brakes at highway speeds; in other examples, they altered Stop signs to be perceived as a variety of other traffic instructions. "If machine learning is the software of the future, we're at a very basic starting point for securing it," said Prateek Mittal, the lead researcher and an associate professor in the Department of Electrical Engineering at Princeton.
- Health & Medicine (1.00)
- Transportation (0.73)
- Information Technology > Security & Privacy (0.50)
Reinventing adversarial machine learning: adversarial ML from scratch
I think this might be a half-decent motivation! I want to explain why I think adversarial ML is so interesting. To give it context, let's start with a ludicrous party question: is a Pop-Tart a ravioli? Let's unpack why that question makes for a fun debate among friends. The question "is Chef Boyardee ravioli?" makes for less entertaining banter because we all agree on the answer (minus the occasional food snob).
Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case
Catak, Evren; Catak, Ferhat Ozgur; Moldsvor, Arild
6G is the next generation of communication systems. In recent years, machine learning algorithms have been applied widely in fields such as health, transportation, and autonomous vehicles, and predictive algorithms will be used in 6G problems. With the rapid development of deep learning techniques, it is critical to take security concerns into account when deploying these algorithms. While machine learning offers significant advantages for 6G, the security of AI models is often ignored; since these algorithms have many real-world applications, security is a vital concern. This paper proposes a mitigation method, based on adversarial learning, for adversarial attacks against 6G machine learning models for millimeter-wave (mmWave) beam prediction. The main idea behind adversarial attacks against machine learning models is to produce faulty results by manipulating trained deep learning models, here for the mmWave beam prediction use case. We also present the adversarial learning mitigation method's performance for 6G security in the mmWave beam prediction application under the fast gradient sign method (FGSM) attack. The mean squared errors of the defended and undefended models are very close.
- Europe > Norway (0.05)
- North America > United States (0.04)
- Information Technology > Security & Privacy (1.00)
- Transportation > Ground > Road (0.34)
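The FGSM attack evaluated in the entry above is a single signed-gradient step. A minimal PyTorch sketch for a regression model trained with mean squared error, as in beam prediction; `model`, the tensors, and `eps` are placeholders rather than the paper's actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One FGSM step: perturb the input in the direction that increases
    the MSE loss the most, with a fixed L-infinity magnitude eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```

The adversarial-learning mitigation the abstract describes then retrains the model on inputs perturbed this way.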
EU report warns that AI makes autonomous vehicles 'highly vulnerable' to attack
The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are "highly vulnerable to a wide range of attacks" that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks with beams of light, overwhelming object detection systems, back-end malicious activity, and adversarial machine learning attacks presented in training data or the physical world. "The attack might be used to make the AI 'blind' for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks," the report reads. "The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector."
- Europe (0.73)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- Transportation (1.00)
- Information Technology > Security & Privacy (1.00)
- Automobiles & Trucks (1.00)
- (2 more...)
Adversarial Machine Learning Attacks on Condition-Based Maintenance Capabilities
Abadi, Hamidreza Habibollahi Najaf
Condition-based maintenance (CBM) strategies use machine learning models to assess the health status of systems from data collected in the physical environment, but machine learning models are vulnerable to adversarial attacks: a malicious adversary can manipulate the collected data to deceive the model and degrade the CBM system's performance. Adversarial machine learning techniques introduced in the computer vision domain can be used to mount stealthy attacks on CBM systems by adding perturbations to the data that confuse trained models; this stealthiness makes the attacks difficult and slow to detect. In this paper, adversarial machine learning in the domain of CBM is introduced, and a case study shows how it can be used to attack CBM capabilities. Adversarial samples are crafted using the fast gradient sign method (FGSM), and the performance of a CBM system under attack is investigated. The results reveal that CBM systems are vulnerable to adversarial machine learning attacks and that defense strategies need to be considered.
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- Asia > Middle East > Iraq > Najaf Governorate > Najaf (0.04)
- Information Technology > Security & Privacy (0.50)
- Government > Military (0.36)
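The entry above closes by calling for defense strategies without naming one. A standard option from the adversarial ML literature, not a method from this paper, is FGSM-based adversarial training. A minimal PyTorch sketch, assuming a classifier over discrete health states; `model`, `optimizer`, and the label encoding are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps):
    """One step of FGSM-based adversarial training: craft a perturbed
    batch against the current model, then fit the model on it.
    x: batch of sensor features; y: integer health-state labels
    (the classification framing is an assumption here)."""
    # Craft FGSM examples against the current parameters.
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).detach()

    # Discard the gradients accumulated while crafting, then do a
    # standard supervised update on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on perturbed batches like this trades some clean accuracy for robustness to the same attack family, which is the usual design consideration when hardening a deployed CBM pipeline.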