Incremental Learning for End-to-End Automatic Speech Recognition
Fu, Li; Li, Xiaoxiao; Zi, Libo
We propose a new incremental learning method for end-to-end Automatic Speech Recognition (ASR) that extends the model's capacity to a new task while retaining performance on previous ones. The proposed method is effective without access to the old dataset, addressing the issues of high retraining cost and unavailable old data. To achieve this, both attention distillation and knowledge distillation are applied to preserve the ability of the old model during progressive learning. Starting from an ASR model pre-trained on 12,000 hours of Mandarin speech, we evaluate the proposed method on a 300-hour new-scenario task and a 1-hour new named-entity task. Experiments show that our method yields 3.25% and 0.88% absolute Character Error Rate (CER) reduction on the new scenario compared with the pre-trained model and the full-data retraining baseline, respectively. It even yields a surprising 0.37% absolute CER reduction on the new scenario compared with fine-tuning. For the new named-entity task, our method significantly improves accuracy compared with the pre-trained model, i.e., a 16.95% absolute CER reduction. For both new-task adaptations, the new models maintain the same accuracy as the retraining baseline on the old tasks.
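As a rough sketch of how the two distillation terms described in the abstract might be combined during incremental training: the old (pre-trained) model acts as a frozen teacher, and its output distributions and attention maps regularize the new model while it learns the new task. The tensor names, temperature, and weighting coefficients `lambda_kd` and `lambda_att` below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def incremental_distillation_loss(student_logits, teacher_logits,
                                  student_attn, teacher_attn,
                                  new_task_loss, temperature=2.0,
                                  lambda_kd=1.0, lambda_att=1.0):
    """Combine the new-task training loss with knowledge and attention distillation.

    student_logits / teacher_logits: (batch, time, vocab) decoder outputs.
    student_attn / teacher_attn: (batch, heads, tgt_len, src_len) attention weights.
    new_task_loss: the ordinary supervised loss on the new-task data.
    """
    # Knowledge distillation: match the softened output distribution
    # of the frozen old (teacher) model.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Attention distillation: keep the new model's attention maps
    # close to those produced by the old model.
    att = F.mse_loss(student_attn, teacher_attn)

    return new_task_loss + lambda_kd * kd + lambda_att * att
```

In this sketch the teacher's forward pass is run with gradients disabled (e.g. under `torch.no_grad()`), so only the student is updated; no old-task data is needed because both distillation terms are computed on the new-task inputs.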
Sep-10-2020
- Genre:
- Research Report (0.64)
- Industry:
- Education (0.70)
- Information Technology > Security & Privacy (0.93)