Are Deep Speech Denoising Models Robust to Adversarial Noise?
Will Schwarzer, Philip S. Thomas, Andrea Fanelli, Xiaoyu Liu
arXiv.org Artificial Intelligence
Deep noise suppression (DNS) models enjoy widespread use in a variety of high-stakes speech applications. However, in this paper, we show that four recent DNS models can each be reduced to outputting unintelligible gibberish through the addition of imperceptible adversarial noise. Furthermore, our results show the near-term plausibility of both targeted attacks, which could induce models to output arbitrary utterances, and over-the-air attacks. While the success of these attacks varies by model and setting, and attacks appear to be strongest when model-specific (i.e., white-box and non-transferable), our results highlight a pressing need for practical countermeasures in DNS systems.
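The attack family the abstract describes can be illustrated with a minimal toy sketch. This is not the paper's method or models; it uses a hypothetical linear stand-in "denoiser" `W` (so the gradient is analytic) and a PGD-style signed-gradient attack to show how a small L-infinity-bounded perturbation can steer a model's output toward an arbitrary attacker-chosen target:

```python
import numpy as np

# Toy illustration (hypothetical, not the paper's setup): a targeted
# PGD-style attack on a linear stand-in "denoiser" W.
rng = np.random.default_rng(0)
n = 16
W = rng.standard_normal((n, n)) / np.sqrt(n)  # stand-in for a DNS model
x = rng.standard_normal(n)                    # clean input frame
target = rng.standard_normal(n)               # arbitrary desired output

def loss(delta):
    # Targeted objective: distance from model output to attacker's target.
    out = W @ (x + delta)
    return float(np.sum((out - target) ** 2))

def grad(delta):
    # Analytic gradient of the loss w.r.t. delta (exact for a linear model;
    # a real attack would use autodiff through the DNS network).
    return 2.0 * W.T @ (W @ (x + delta) - target)

eps, step, iters = 0.1, 0.02, 50
delta = np.zeros(n)
for _ in range(iters):
    delta = delta - step * np.sign(grad(delta))  # signed-gradient descent
    delta = np.clip(delta, -eps, eps)            # project to L_inf budget

# The perturbation stays within the "imperceptibility" budget eps,
# yet moves the output measurably closer to the attacker's target.
print(loss(np.zeros(n)), loss(delta))
```

In a real white-box attack on a neural denoiser, the analytic gradient above would be replaced by backpropagation through the network, and the loss would compare spectrograms or intelligibility proxies rather than raw vectors.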
Mar-14-2025