Exploring Secure Machine Learning Through Payload Injection and FGSM Attacks on ResNet-50
Umesh Yadav, Suman Niraula, Gaurav Kumar Gupta, Bicky Yadav
–arXiv.org Artificial Intelligence
As ML models continue to integrate into critical cybersecurity systems, the ability to exploit these models through adversarial techniques poses significant threats. A study predicts that by 2025, 30% of cyberattacks will involve adversarial machine-learning tactics [16]. Pre-trained models are susceptible to adversarial perturbations, which can undermine trust in AI systems due to the lack of customized …

In the modern landscape of cybersecurity, machine learning (ML) models, especially in areas like image classification, are increasingly integrated into systems where robustness and security are paramount. However, these models are highly susceptible to adversarial attacks, where small, crafted perturbations can lead to incorrect predictions and, in more severe cases, unauthorized access or manipulation of systems [2].
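The FGSM attack named in the title can be sketched in a few lines: take a single gradient step of size epsilon in the direction that increases the classification loss. The sketch below uses a tiny stand-in classifier rather than the pre-trained ResNet-50 the paper targets, so it runs self-contained; the `fgsm_attack` helper and the toy model are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: one step of size epsilon in the
    direction of the sign of the loss gradient w.r.t. the input."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in valid [0, 1] range

# Hypothetical stand-in for ResNet-50: a tiny linear classifier on
# fake 8x8 RGB "images" (the paper attacks a real pre-trained network).
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
x = torch.rand(1, 3, 8, 8)
y = model(x).argmax(dim=1)       # use the model's own prediction as label
x_adv = fgsm_attack(model, x, y)
```

The perturbation is bounded by epsilon in the L-infinity norm, which is what makes FGSM adversarial examples visually near-indistinguishable from the original inputs while still flipping predictions on vulnerable models.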
Jan-3-2025
- Genre:
- Research Report (1.00)
- Industry:
- Government > Military
- Cyberwarfare (0.89)
- Information Technology > Security & Privacy (1.00)
- Technology: