Correlated Bigram LSA for Unsupervised Language Model Adaptation
Yik-Cheung Tam, Tanja Schultz
We present a correlated bigram LSA approach for unsupervised LM adaptation for automatic speech recognition. The model is trained using efficient variational EM and smoothed using the proposed fractional Kneser-Ney smoothing, which handles fractional counts. We address the scalability issue for large training corpora via bootstrapping of bigram LSA from unigram LSA. For LM adaptation, unigram and bigram LSA are integrated into the background N-gram LM via marginal adaptation and linear interpolation, respectively. Experimental results on the Mandarin RT04 test set show that applying unigram and bigram LSA together yields 6%-8% relative perplexity reduction and 2.5% relative character error rate reduction, which is statistically significant compared to applying only unigram LSA. On the large-scale evaluation on Arabic, a 3% relative word error rate reduction is achieved, which is also statistically significant.
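For context on the integration step the abstract describes, below is a minimal Python sketch of the two standard combination rules it names: marginal adaptation of the background unigram distribution toward the LSA marginal, and linear interpolation of bigram probabilities. The function names, dictionary-based probability tables, and the beta and lam values are illustrative assumptions, not the paper's implementation or tuned settings.

```python
def marginal_adaptation(p_bg_unigram, p_lsa_unigram, beta=0.7):
    """Rescale background unigram probabilities toward the LSA marginal.

    Standard marginal-adaptation rule (beta is an illustrative weight):
        p_a(w)  proportional to  p_bg(w) * (p_lsa(w) / p_bg(w)) ** beta
    followed by renormalization. Assumes all p_bg values are nonzero.
    """
    scaled = {
        w: p * (p_lsa_unigram.get(w, p) / p) ** beta
        for w, p in p_bg_unigram.items()
    }
    z = sum(scaled.values())  # normalizer so the result is a distribution
    return {w: p / z for w, p in scaled.items()}


def interpolate_bigram(p_bg, p_lsa, lam=0.3):
    """Linear interpolation of a bigram LSA probability with the
    background N-gram LM probability for the same (history, word) pair:
        p(w | h) = lam * p_lsa(w | h) + (1 - lam) * p_bg(w | h)
    """
    return lam * p_lsa + (1 - lam) * p_bg


# Toy usage: adapt a 3-word unigram distribution, then interpolate
# one bigram probability. All numbers are made up for illustration.
p_bg = {"the": 0.5, "cat": 0.3, "dog": 0.2}
p_lsa = {"the": 0.4, "cat": 0.5, "dog": 0.1}
print(marginal_adaptation(p_bg, p_lsa))
print(interpolate_bigram(p_bg=0.01, p_lsa=0.03))
```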
Neural Information Processing Systems
Dec-31-2009