Cross-Domain Audio Deepfake Detection: Dataset and Analysis
Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang
Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices from a single utterance. However, existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1% and 6.5%, respectively. Additionally, we demonstrate our models' strong few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly degrade detection accuracy, necessitating further research.
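The reported numbers are equal error rates (EER), i.e., the operating point at which the false-acceptance rate (real speech flagged as fake) equals the false-rejection rate (fake speech missed). Below is a minimal sketch of how an EER can be estimated from detector scores; the function name and the synthetic scores are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

def compute_eer(scores, labels):
    """Estimate the equal error rate from detector scores.
    scores: higher values indicate 'fake'; labels: 1 = fake, 0 = real."""
    thresholds = np.sort(np.unique(scores))
    fake = labels == 1
    real = labels == 0
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(scores[real] >= t)  # real audio wrongly flagged as fake
        frr = np.mean(scores[fake] < t)   # fake audio missed
        gap = abs(far - frr)
        if gap < best_gap:                # keep the threshold where FAR and FRR cross
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Toy usage with random scores for 100 real and 100 fake clips (hypothetical data).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.3, 0.1, 100), rng.normal(0.7, 0.1, 100)])
labels = np.concatenate([np.zeros(100), np.ones(100)]).astype(int)
print(f"EER: {compute_eer(scores, labels):.3f}")
```

In practice, toolkits compute the EER from the ROC curve rather than by brute-force threshold search, but the crossing-point definition is the same.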
arXiv.org Artificial Intelligence
Apr-7-2024
- Genre:
- Research Report (0.40)
- Industry:
- Information Technology > Security & Privacy (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks (1.00)
- Natural Language (1.00)
- Speech (1.00)