EfficientTDNN: Efficient Architecture Search for Speaker Recognition in the Wild

Rui Wang, Zhihua Wei, Shouling Ji, Zhen Hong

arXiv.org Artificial Intelligence 

Speaker recognition is an audio biometric technology that identifies speakers automatically from their acoustic characteristics. Such systems have become an essential means of verifying identity in scenarios including smart homes, general business interactions, e-commerce applications, and forensics. However, the mismatch between training and real-world data shifts the speaker embedding space and severely degrades recognition performance. Various complex neural architectures have been proposed to address speaker recognition in the wild, but they neglect storage and computation requirements. To address this issue, we propose a neural architecture search-based efficient time-delay neural network (EfficientTDNN) to improve inference efficiency while maintaining recognition accuracy. The proposed EfficientTDNN comprises three phases. First, supernet design constructs a dynamic neural architecture that consists of sequential cells and enables network pruning. Second, progressive training optimizes randomly sampled subnets that inherit the weights of the supernet. Third, three search methods, namely manual grid search, random search, and model-predictive evolutionary search, are introduced to find a trade-off between accuracy and efficiency. Experiments on the VoxCeleb dataset show that EfficientTDNN provides a huge search space of approximately $10^{13}$ subnets and achieves 1.66% EER and 0.156 DCF$_{0.01}$ with 565M MACs. A comprehensive investigation suggests that the trained supernet generalizes to cells unseen during training and achieves an acceptable balance between accuracy and efficiency.
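The three-phase pipeline described above can be illustrated with a minimal sketch of the final search phase: an evolutionary search over subnet configurations under a compute budget. This is not the paper's implementation; the search space (per-cell kernel size and channel width), the MACs proxy, and the fitness function are all simplified placeholders standing in for the trained supernet and its accuracy predictor.

```python
import random

# Hypothetical search space, loosely modeled on a dynamic TDNN supernet:
# each subnet is a sequence of cells, each cell a (kernel size, width) pair.
KERNELS = [1, 3, 5]
WIDTHS = [256, 384, 512]
MAX_DEPTH = 4

def sample_subnet(rng):
    """Randomly sample a subnet configuration from the search space."""
    depth = rng.randint(2, MAX_DEPTH)
    return tuple((rng.choice(KERNELS), rng.choice(WIDTHS)) for _ in range(depth))

def macs(subnet):
    """Toy proxy for multiply-accumulate cost (not the paper's measure)."""
    return sum(k * w for k, w in subnet)

def fitness(subnet, budget):
    """Stand-in for an accuracy predictor: reject over-budget subnets,
    otherwise prefer larger (presumably more accurate) subnets."""
    cost = macs(subnet)
    return -1.0 if cost > budget else cost / budget

def evolutionary_search(budget, generations=30, pop=20, seed=0):
    """Keep the fittest half each generation and mutate one cell per parent."""
    rng = random.Random(seed)
    population = [sample_subnet(rng) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=lambda s: fitness(s, budget), reverse=True)
        parents = population[: pop // 2]
        children = []
        for parent in parents:
            child = list(parent)
            i = rng.randrange(len(child))
            child[i] = (rng.choice(KERNELS), rng.choice(WIDTHS))  # mutate one cell
            children.append(tuple(child))
        population = parents + children
    return max(population, key=lambda s: fitness(s, budget))

best = evolutionary_search(budget=4000)
print(best, macs(best))
```

In the actual system, `fitness` would evaluate each candidate subnet with weights inherited from the trained supernet (or via a learned predictor), so no retraining is needed per candidate.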
