Plug-and-Play Co-Occurring Face Attention for Robust Audio-Visual Speaker Extraction
Zexu Pan, Shengkui Zhao, Tingting Wang, Kun Zhou, Yukun Ma, Chong Zhang, Bin Ma
arXiv.org Artificial Intelligence
Audio-visual speaker extraction isolates a target speaker's speech from a speech mixture conditioned on a visual cue, typically a recording of the target speaker's face. However, in real-world scenarios, other co-occurring faces are often present on-screen, providing valuable speaker activity cues. In this work, we introduce a plug-and-play inter-speaker attention module that processes a flexible number of co-occurring faces, allowing for more accurate speaker extraction in complex multi-person environments. We integrate our module into two prominent models: AV-DPRNN and the state-of-the-art AV-TFGridNet. Extensive experiments on diverse datasets, including the highly overlapped VoxCeleb2 and the sparsely overlapped MISP, demonstrate that our approach consistently outperforms the baselines. Furthermore, cross-dataset evaluations on LRS2 and LRS3 confirm the robustness and generalizability of our method.
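The paper does not spell out the module's internals here, but the idea of attending from the target speaker's representation over a variable number of co-occurring face streams can be sketched as scaled dot-product attention. The following is a minimal, hypothetical NumPy illustration; the function name, shapes, and residual fusion are assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_speaker_attention(target, others):
    """Illustrative sketch (not the paper's exact module).

    target: (T, D) frame-wise embedding of the target speaker's face.
    others: (N, T, D) embeddings of N co-occurring faces; N may vary per clip.
    Returns the target embedding fused with attended co-speaker context, (T, D).
    """
    if others.shape[0] == 0:
        return target  # no co-occurring faces: module acts as a no-op
    d = target.shape[-1]
    q = target[:, None, :]               # (T, 1, D) query: target frame
    k = np.transpose(others, (1, 0, 2))  # (T, N, D) keys/values: other speakers
    scores = (q @ np.transpose(k, (0, 2, 1))) / np.sqrt(d)  # (T, 1, N)
    w = softmax(scores, axis=-1)         # attention weights over speakers
    context = (w @ k)[:, 0, :]           # (T, D) weighted co-speaker context
    return target + context              # residual fusion keeps plug-and-play behavior
```

Because the attention is computed over the speaker axis, the same parameters handle any number of on-screen faces, which is what makes such a module pluggable into backbones like AV-DPRNN or AV-TFGridNet without architectural changes.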
May-28-2025