Human Detection of Political Speech Deepfakes across Transcripts, Audio, and Video
Groh, Matthew; Sankaranarayanan, Aruna; Singh, Nikhil; Kim, Dong Young; Lippman, Andrew; Picard, Rosalind
Recent advances in technology for hyper-realistic visual effects provoke the concern that deepfake videos of political speeches will soon be visually indistinguishable from authentic video recordings. Conventional wisdom in communication theory predicts that people will fall for fake news more often when the same story is presented as video rather than text. We conduct 4 pre-registered randomized experiments with 2,015 participants to evaluate how accurately humans distinguish real political speeches from fabrications across base rates of misinformation, audio sources, and media modalities. We find that base rates of misinformation minimally influence discernment and that deepfakes with audio produced by state-of-the-art text-to-speech algorithms are harder to discern than the same deepfakes with voice-actor audio. Moreover, audio and visual information enables more accurate discernment than text alone: human discernment relies more on how something is said (the audio-visual cues) than on what is said (the speech content).
arXiv.org Artificial Intelligence
Jul-3-2023
- Country:
- North America > United States > Massachusetts (0.28)
- Genre:
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (1.00)
- Industry:
- Government > Regional Government
- Information Technology > Security & Privacy (1.00)
- Media > News (1.00)
- Technology: