Ge, Meng
Mamba-SEUNet: Mamba UNet for Monaural Speech Enhancement
Wang, Junyu, Lin, Zizhen, Wang, Tianrui, Ge, Meng, Wang, Longbiao, Dang, Jianwu
Speech enhancement (SE) tasks aim to improve speech clarity by suppressing background noise, reverberation, and other acoustic interferences, thereby optimizing user experience and communication efficacy. In recent years, with the rapid development of deep learning, a variety of representative neural networks have emerged, especially those based on convolutional neural networks (CNN) [1]-[4], transformers [5]-[7], and U-Net architectures [8]-[10]. Generally, depending on the processing method of the input signal, these can be broadly categorized into time-domain and time-frequency (T-F) domain methods. In parallel, developments in state-space models (SSM) [8], [20] present a promising alternative with linear complexity and high efficiency in handling long-sequence inputs. Mamba [21], as a novel structured SSM (S4), introduces a selective processing mechanism for input information and an efficient hardware-aware algorithm, achieving performance comparable to or exceeding Transformer-based methods across domains such as natural language, image, and audio [22]-[24]. Particularly, a recent work [25] demonstrated improved performance with reduced FLOPs by simply replacing the conformer in MP-SENet with Mamba, further validating the effectiveness of Mamba in speech processing tasks.
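The linear-time, input-selective recurrence that the abstract attributes to Mamba can be sketched as follows. This is a toy illustration, not the paper's implementation: all shapes, weight names (`W_B`, `W_C`, `W_delta`), and the plain Python loop (instead of the hardware-aware parallel scan) are illustrative assumptions.

```python
import numpy as np

def selective_ssm(x, A, W_B, W_C, W_delta):
    """Toy linear-time scan of a selective state-space model.

    x: (T, D) input sequence; A: (D, N) negative state-decay parameters;
    W_B, W_C, W_delta: projections that make B, C, and the step size
    depend on the input (the "selective" mechanism).
    """
    T, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))                           # hidden state per channel
    y = np.zeros((T, D))
    for t in range(T):                             # O(T): one update per step
        delta = np.log1p(np.exp(x[t] @ W_delta))   # softplus step size, (D,)
        B = x[t] @ W_B                             # input-dependent input matrix, (N,)
        C = x[t] @ W_C                             # input-dependent output matrix, (N,)
        A_bar = np.exp(delta[:, None] * A)         # discretized decay, (D, N)
        h = A_bar * h + delta[:, None] * np.outer(x[t], B)
        y[t] = h @ C                               # read out, (D,)
    return y
```

Because the state `h` has fixed size, cost grows linearly with sequence length `T`, in contrast to the quadratic attention of Transformer-based methods.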
The NUS-HLT System for ICASSP2024 ICMC-ASR Grand Challenge
Ge, Meng, Peng, Yizhou, Jiang, Yidi, Lin, Jingru, Ao, Junyi, Yildirim, Mehmet Sinan, Wang, Shuai, Li, Haizhou, Feng, Mengling
This paper summarizes our team's efforts in both tracks of the ICMC-ASR Challenge for in-car multi-channel automatic speech recognition. Our submitted systems include multi-channel front-end enhancement and diarization, training data augmentation, and speech recognition modeling with multi-channel branches. Tested on the official Eval1 and Eval2 sets, our best system achieves a relative 34.3% improvement in CER and a 56.5% improvement in cpCER over the official baseline system.
MIMO-DBnet: Multi-channel Input and Multiple Outputs DOA-aware Beamforming Network for Speech Separation
Fu, Yanjie, Yin, Haoran, Ge, Meng, Wang, Longbiao, Zhang, Gaoyan, Dang, Jianwu, Deng, Chengyun, Wang, Fei
Recently, many deep learning based beamformers have been proposed for multi-channel speech separation. Nevertheless, most of them rely on extra cues known in advance, such as speaker features, face images, or directional information. In this paper, we propose an end-to-end beamforming network, namely MIMO-DBnet, for direction-guided speech separation given merely the mixture signal. Specifically, we design a multi-channel input and multiple outputs architecture to predict the direction-of-arrival based embeddings and beamforming weights for each source. The precisely estimated directional embedding provides effective spatial discrimination guidance for the neural beamformer to offset the effect of phase wrapping, thus allowing more accurate reconstruction of the two sources' speech signals. Experiments show that our proposed MIMO-DBnet not only achieves consistent improvements over baseline systems, but also maintains performance on high frequency bands when phase wrapping occurs.
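The final step of such a neural beamformer, applying one set of predicted complex weights per source to the multi-channel mixture STFT, can be written in a few lines. The shapes and the function name below are illustrative assumptions; the network that predicts the weights is the paper's contribution and is not reproduced here.

```python
import numpy as np

def apply_beamforming(mix_stft, weights):
    """Apply per-source beamforming weights to a multi-channel mixture.

    mix_stft: (C, T, F) complex STFT of the C-channel mixture.
    weights:  (S, C, F) complex beamforming weights, one filter per
              source and frequency bin (as a network like MIMO-DBnet
              would predict them).
    Returns (S, T, F): one beamformed STFT per source.
    """
    # y[s, t, f] = sum_c conj(w[s, c, f]) * x[c, t, f]
    return np.einsum('scf,ctf->stf', weights.conj(), mix_stft)
```

Because the filter is frequency-dependent, a well-estimated directional embedding can steer each per-bin filter consistently even where inter-channel phase differences wrap around 2π at high frequencies.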
MIMO-DoAnet: Multi-channel Input and Multiple Outputs DoA Network with Unknown Number of Sound Sources
Yin, Haoran, Ge, Meng, Fu, Yanjie, Zhang, Gaoyan, Wang, Longbiao, Zhang, Lei, Qiu, Lin, Dang, Jianwu
Recent neural network based Direction of Arrival (DoA) estimation algorithms have performed well in scenarios with an unknown number of sound sources. These algorithms usually map the multi-channel audio input to a single output (i.e., the overall spatial pseudo-spectrum (SPS) of all sources), an approach called MISO. However, such MISO algorithms strongly depend on an empirical threshold setting and on the angle assumption that the angles between sound sources are greater than a fixed value. To address these limitations, we propose a novel multi-channel input and multiple outputs DoA network called MIMO-DoAnet. Unlike general MISO algorithms, MIMO-DoAnet predicts the SPS coding of each sound source with the help of the informative spatial covariance matrix. By doing so, the thresholding task of detecting the number of sound sources becomes the easier task of detecting whether there is a sound source in each output, and the serious interaction between sound sources disappears during the inference stage. Experimental results show that MIMO-DoAnet achieves relative 18.6% and absolute 13.3%, and relative 34.4% and absolute 20.2% F1 score improvements over the MISO baseline system in 3- and 4-source scenes, respectively. The results also demonstrate that MIMO-DoAnet alleviates the threshold setting problem and effectively solves the angle assumption problem.
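The MISO-vs-MIMO decoding difference described above can be sketched with two toy decoders. This is an illustration of the argument, not the paper's code: the thresholds, the minimum-separation constraint, and the 1° grid are all assumptions.

```python
import numpy as np

def miso_detect(sps, threshold=0.5, min_sep=10):
    """MISO-style decoding: peak-pick one overall spatial pseudo-spectrum.
    Requires an empirical threshold and assumes peaks are at least
    min_sep degrees apart (the "angle assumption")."""
    peaks = []
    for a in np.argsort(sps)[::-1]:          # bins by descending energy
        if sps[a] < threshold:
            break
        # circular angular distance to already-accepted peaks
        if all(min(abs(a - p), len(sps) - abs(a - p)) >= min_sep for p in peaks):
            peaks.append(int(a))
    return sorted(peaks)

def mimo_detect(per_source_sps, active_threshold=0.5):
    """MIMO-style decoding: each output branch carries one source's SPS,
    so detection reduces to "is this branch active?", with no separation
    assumption between sources."""
    doas = []
    for sps in per_source_sps:
        if sps.max() >= active_threshold:    # branch carries a source
            doas.append(int(np.argmax(sps)))
    return sorted(doas)
```

With two sources only 5° apart, the MISO decoder above merges them into one detection, while the MIMO decoder recovers both, which is the failure mode the multiple-output design removes.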