
Collaborating Authors

 Tang, Yuxun


Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks

arXiv.org Artificial Intelligence

Multimodal foundation models, such as Gemini and ChatGPT, have revolutionized human-machine interactions by seamlessly integrating various forms of data. Developing a universal spoken language model that comprehends a wide range of natural language instructions is critical for bridging communication gaps and facilitating more intuitive interactions. However, the absence of a comprehensive evaluation benchmark poses a significant challenge. We present Dynamic-SUPERB Phase-2, an open and evolving benchmark for the comprehensive evaluation of instruction-based universal speech models. Building upon the first generation, this second version incorporates 125 new tasks contributed collaboratively by the global research community, expanding the benchmark to a total of 180 tasks, making it the largest benchmark for speech and audio evaluation. While the first generation of Dynamic-SUPERB was limited to classification tasks, Dynamic-SUPERB Phase-2 broadens its evaluation capabilities by introducing a wide array of novel and diverse tasks, including regression and sequence generation, across speech, music, and environmental audio. Evaluation results indicate that none of the models performed well universally. SALMONN-13B excelled in English ASR, while WavLLM demonstrated high accuracy in emotion recognition, but current models still require further innovations to handle a broader range of tasks. We will soon open-source all task data and the evaluation pipeline.
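The abstract describes scoring a single model across heterogeneous task types (classification, regression). A minimal sketch of such an evaluation loop is below, assuming a model exposed as a callable `model(instruction, audio)` that returns text; the task names, data layout, and scoring rules are illustrative assumptions, not the actual Dynamic-SUPERB pipeline.

```python
# Illustrative sketch: evaluating an instruction-following speech model
# over mixed task types. All names here are hypothetical.

def evaluate(model, tasks):
    """Score a spoken language model across heterogeneous task types."""
    scores = {}
    for name, task in tasks.items():
        if task["type"] == "classification":
            # Accuracy: the model's text answer must match the label.
            correct = sum(
                model(ex["instruction"], ex["audio"]).strip().lower()
                == ex["label"].lower()
                for ex in task["examples"]
            )
            scores[name] = correct / len(task["examples"])
        elif task["type"] == "regression":
            # Mean absolute error between predicted and reference values.
            errors = [
                abs(float(model(ex["instruction"], ex["audio"])) - ex["label"])
                for ex in task["examples"]
            ]
            scores[name] = sum(errors) / len(errors)
    return scores

# Toy stand-in "model" that answers from a lookup on the audio payload.
def toy_model(instruction, audio):
    return {"clip_a": "happy", "clip_b": "3.0"}[audio]

tasks = {
    "emotion_recognition": {
        "type": "classification",
        "examples": [
            {"instruction": "What emotion is expressed?",
             "audio": "clip_a", "label": "happy"},
        ],
    },
    "speaker_count": {
        "type": "regression",
        "examples": [
            {"instruction": "How many speakers are present?",
             "audio": "clip_b", "label": 3.0},
        ],
    },
}

results = evaluate(toy_model, tasks)
```

Keeping per-task metrics separate, as above, matches the abstract's finding that no model dominates every task: a single aggregate number would hide exactly the per-task contrasts (e.g. ASR vs. emotion recognition) the benchmark is designed to surface.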


SVDD Challenge 2024: A Singing Voice Deepfake Detection Challenge Evaluation Plan

arXiv.org Artificial Intelligence

The rapid advancement of AI-generated singing voices, which now closely mimic natural human singing and align seamlessly with musical scores, has led to heightened concerns for artists and the music industry. Unlike spoken voice, singing voice presents unique challenges due to its musical nature and the presence of strong background music, making singing voice deepfake detection (SVDD) a specialized field requiring focused attention. To promote SVDD research, we recently proposed the "SVDD Challenge," the very first research challenge focusing on SVDD for lab-controlled and in-the-wild bonafide and deepfake singing voice recordings. The challenge will be held in conjunction with the 2024 IEEE Spoken Language Technology Workshop (SLT 2024).


Findings of the 2023 ML-SUPERB Challenge: Pre-Training and Evaluation over More Languages and Beyond

arXiv.org Artificial Intelligence

The 2023 Multilingual Speech Universal Performance Benchmark (ML-SUPERB) Challenge expands upon the acclaimed SUPERB framework, emphasizing self-supervised models in multilingual speech recognition and language identification. The challenge comprises a research track focused on applying ML-SUPERB to specific multilingual subjects, a Challenge Track for model submissions, and a New Language Track where language resource researchers can contribute and evaluate their low-resource language data in the context of the latest progress in multilingual speech recognition. The benchmark primarily focuses on evaluating SSL models for automatic speech recognition (ASR) and language identification (LID). To cater to different use cases for SSL models, ML-SUPERB includes two tracks with four different tasks: the monolingual track (monolingual ASR) and the multilingual track (multilingual ASR, LID, joint multilingual ASR/LID). Similar to SUPERB, ML-SUPERB utilizes frozen SSL models as feature extractors and employs a lightweight downstream model that can be fine-tuned for different tracks to achieve high training efficiency. The released public benchmark of ML-SUPERB covers 143 languages.
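The frozen-upstream recipe described above can be sketched as follows. This is a toy stand-in, not the ML-SUPERB code: the "SSL model" is replaced by a fixed hand-crafted feature extractor, and the lightweight downstream model is a logistic-regression head trained on a synthetic binary task; all function names and data are assumptions for illustration.

```python
import math
import random

random.seed(0)

# "Frozen" upstream: a fixed feature extractor standing in for a pretrained
# SSL model. Its parameters are never updated during downstream training.
def frozen_features(waveform):
    energy = sum(x * x for x in waveform) / len(waveform)
    mean = sum(waveform) / len(waveform)
    zero_crossings = sum(
        1 for a, b in zip(waveform, waveform[1:]) if a * b < 0
    ) / len(waveform)
    return [energy, mean, zero_crossings]

# Lightweight downstream head: a logistic-regression classifier whose
# weights are the ONLY trainable parameters, which is what makes the
# per-track fine-tuning cheap.
def train_head(examples, labels, lr=0.5, epochs=200):
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the logistic loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy binary classification data: class 0 = low-energy noise,
# class 1 = a loud sinusoidal tone.
def make_clip(label):
    if label == 0:
        return [random.uniform(-0.1, 0.1) for _ in range(200)]
    return [0.8 * math.sin(0.3 * t) for t in range(200)]

labels = [i % 2 for i in range(40)]
feats = [frozen_features(make_clip(y)) for y in labels]
w, b = train_head(feats, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(feats, labels)) / len(labels)
```

The design point the abstract makes carries over directly: because only the small head is trained, the same (expensive) upstream features can be reused across all four tasks and both tracks.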