A Study of Black Box Adversarial Attacks in Computer Vision
Siddhant Bhambri, Sumanyu Muku, Avinash Tulasi, Arun Balaji Buduru
Machine learning has seen tremendous advances in the past few years, which have led to deep learning models being deployed in varied applications of day-to-day life. Attacks on such models using perturbations, particularly in real-life scenarios, pose a serious challenge to their applicability, pushing research toward enhancing the robustness of these models. Since the introduction of these perturbations by Szegedy et al., a significant amount of research has focused on the reliability of such models, primarily in two settings: white-box, where the adversary has access to the targeted model and its parameters; and black-box, which resembles a real-life scenario in which the adversary has almost no knowledge of the model to be attacked. We draw attention to the latter scenario and present a comprehensive comparative study of the adversarial black-box attack approaches proposed to date. The second half of this literature survey focuses on defense techniques. To the best of our knowledge, this is the first study that specifically focuses on the black-box setting, and we hope it motivates future work in this direction.
Dec-3-2019