Approach to Learning Generalized Audio Representation Through Batch Embedding Covariance Regularization and Constant-Q Transforms
Shah, Ankit, Chen, Shuyi, Zhou, Kejun, Chen, Yue, Raj, Bhiksha
arXiv.org Artificial Intelligence
General-purpose embeddings are highly desirable for few-shot and even zero-shot learning in many application scenarios, including audio tasks. To better understand such representations, we conducted a thorough error analysis and visualization of the HEAR 2021 submission results. Inspired by this analysis, this work experiments with different front-end audio preprocessing methods, including the Constant-Q Transform (CQT) and the Short-time Fourier Transform (STFT), and proposes a Batch Embedding Covariance Regularization (BECR) term to more holistically capture the frequency information received by the human auditory system. We tested the models on the suite of HEAR 2021 tasks, which span a broad range of audio tasks. Preliminary results show that (1) the proposed BECR yields a more dispersed embedding on the test set, (2) BECR improves the PaSST model without extra computational complexity, and (3) STFT preprocessing outperforms CQT in all tasks we tested.
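The abstract does not give the exact BECR formula, but covariance-based regularizers that encourage dispersed embeddings typically penalize the off-diagonal entries of the batch embedding covariance matrix. The sketch below illustrates that general pattern under this assumption; the function name `becr_penalty` and all details are hypothetical, not the authors' implementation.

```python
import numpy as np

def becr_penalty(embeddings: np.ndarray) -> float:
    """Hypothetical covariance-based regularizer on a batch of embeddings.

    Assumption: penalizing squared off-diagonal covariance entries, a
    common pattern for encouraging decorrelated (dispersed) embedding
    dimensions; the paper's exact BECR term may differ.
    """
    # embeddings: (batch_size, embedding_dim)
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    n = embeddings.shape[0]
    cov = centered.T @ centered / (n - 1)      # (dim, dim) sample covariance
    off_diag = cov - np.diag(np.diag(cov))     # zero out the diagonal
    # Mean squared off-diagonal magnitude; lower = more decorrelated
    return float((off_diag ** 2).sum() / embeddings.shape[1])

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 8))                   # nearly decorrelated batch
print(becr_penalty(z))                         # small penalty
dup = np.repeat(z[:, :1], 8, axis=1)           # fully correlated dimensions
print(becr_penalty(dup))                       # much larger penalty
```

Added to the task loss with a small weight, such a term pushes the encoder toward embeddings whose dimensions carry less redundant information, which is consistent with the "more dispersed embedding" effect reported above.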
Mar-6-2023