Are Transformers in Pre-trained LM A Good ASR Encoder? An Empirical Study
Keyu An, Shiliang Zhang, Zhijie Yan
Our underlying hypothesis posits that, despite being initially trained on text-based corpora, these transformers possess a remarkable capacity to extract effective features from the input sequence. This inherent capability, we argue, is transferable to speech data, thereby augmenting the acoustic modeling ability of ASR. Through rigorous empirical analysis, our findings reveal a notable improvement in Character Error Rate (CER) and Word Error Rate (WER) across diverse ASR tasks when transformers from pre-trained LMs are incorporated. In particular, they serve as an advantageous starting point for initializing ASR encoders. Furthermore, we find that these transformers, when integrated into a well-established ASR encoder, can significantly boost performance, especially in scenarios where deep semantic comprehension is pivotal. This underscores the potential of leveraging the semantic prowess embedded within pre-trained transformers to advance the capabilities of ASR systems.
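As a rough illustration of the first finding above (using transformer blocks from a pre-trained LM to initialize an ASR encoder), the sketch below builds an encoder whose transformer body is copied from a pre-trained GPT-2 loaded through Hugging Face `transformers`. The choice of GPT-2, the convolutional subsampling front-end, and all layer counts and dimensions are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model


class LMInitializedASREncoder(nn.Module):
    """ASR encoder whose transformer body is initialized from a pre-trained LM.

    A small convolutional front-end downsamples log-mel features in time; the
    transformer blocks borrowed from GPT-2 then model the resulting sequence.
    GPT-2 blocks apply causal self-attention internally, so this sketch keeps
    the LM's original (unidirectional) attention pattern, and positional
    encoding is omitted for brevity.
    """

    def __init__(self, feat_dim: int = 80, lm_name: str = "gpt2", num_layers: int = 6):
        super().__init__()
        lm = GPT2Model.from_pretrained(lm_name)
        d_model = lm.config.hidden_size  # 768 for the base GPT-2 checkpoint

        # Conv2d subsampling front-end (4x downsampling in time), a common ASR choice.
        conv_channels = 64
        self.subsample = nn.Sequential(
            nn.Conv2d(1, conv_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(conv_channels, conv_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        freq_after = (feat_dim + 3) // 4  # frequency bins left after two stride-2 convs
        self.proj = nn.Linear(conv_channels * freq_after, d_model)

        # Reuse the first `num_layers` transformer blocks of the LM as the encoder body.
        self.blocks = nn.ModuleList(lm.h[:num_layers])
        self.norm = nn.LayerNorm(d_model)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) log-mel filterbank features
        x = self.subsample(feats.unsqueeze(1))          # (B, C, T/4, F/4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)  # (B, T/4, C * F/4)
        x = self.proj(x)                                # (B, T/4, d_model)
        for block in self.blocks:
            out = block(x)
            x = out[0] if isinstance(out, tuple) else out  # GPT-2 blocks return a tuple
        return self.norm(x)


if __name__ == "__main__":
    encoder = LMInitializedASREncoder()
    dummy = torch.randn(2, 200, 80)   # two utterances, 200 frames of 80-dim features
    print(encoder(dummy).shape)       # expected: torch.Size([2, 50, 768])
```

For the second usage discussed in the abstract, the same loading mechanism would apply, but the borrowed blocks would be appended to or interleaved with an existing speech encoder rather than forming the encoder body themselves.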
arXiv.org Artificial Intelligence
Sep-26-2024
- Country:
- Europe > Austria
- Vienna (0.14)
- North America > United States
- Pennsylvania (0.14)
- Genre:
- Research Report > New Finding (0.49)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks (1.00)
- Natural Language (1.00)
- Speech (1.00)