Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples
Moustapha M. Cisse, Yossi Adi, Natalia Neverova, Joseph Keshet
Neural Information Processing Systems
Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce Houdini, a novel, flexible approach for generating adversarial examples specifically tailored to the final performance measure of the task considered, even when that measure is combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation, and semantic segmentation. In all cases, attacks based on Houdini achieve a higher success rate than those based on the traditional surrogates used to train the models, while using a less perceptible adversarial perturbation.
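As context for the surrogate-based attacks the abstract contrasts Houdini with, here is a minimal sketch of a standard gradient-sign attack (in the style of FGSM) against a surrogate cross-entropy loss; this is the baseline family of methods, not Houdini's task-loss objective itself, and the toy logistic model, weights, and step size below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, epsilon=0.1):
    """One signed-gradient step that increases the cross-entropy
    (surrogate) loss of a toy logistic model (w, b) on label y."""
    p = sigmoid(np.dot(w, x) + b)
    # For a linear logit, d(loss)/dx of binary cross-entropy is (p - y) * w
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical model and input, for illustration only
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_attack(x, y, w, b, epsilon=0.2)
loss = lambda x_: -np.log(sigmoid(np.dot(w, x_) + b))
# The perturbed input should incur at least as much surrogate loss
print(loss(x_adv) >= loss(x))
```

Houdini's contribution, per the abstract, is to replace the surrogate loss in such attacks with an objective tied to the actual task metric (e.g. word error rate), which plain gradient steps cannot differentiate directly.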
- Genre:
- Research Report (0.68)
- Industry:
- Information Technology > Security & Privacy (0.47)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks > Deep Learning (1.00)
- Performance Analysis > Accuracy (0.69)
- Natural Language (1.00)
- Speech > Speech Recognition (0.87)
- Vision (0.90)