Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization
Bashar Al-Rfooh, Gheith Abandah, Rami Al-Rfou
arXiv.org Artificial Intelligence
Most previous work on Arabic diacritization relied on training models from scratch. In this paper, we investigate how to leverage pre-trained language models to learn diacritization. We finetune token-free pre-trained multilingual models (ByT5) to predict and insert missing diacritics in Arabic text, a complex task that requires understanding both the sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art results on the diacritization task with a minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the benefit of the research community.
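The abstract frames diacritization as a byte-level text-to-text task: the model reads undiacritized Arabic text and emits the same text with diacritics inserted. The sketch below illustrates the byte-level (token-free) encoding that ByT5 uses, where each token id is simply a UTF-8 byte offset by 3 to reserve ids for pad, eos, and unk. This is an illustrative assumption of the setup, not the authors' released code; the helper names `byt5_encode`/`byt5_decode` are hypothetical.

```python
# Minimal sketch of ByT5-style byte-level tokenization for the
# diacritization task. ByT5 operates directly on UTF-8 bytes,
# shifting each byte by 3 to reserve ids 0 (pad), 1 (</s>), 2 (unk).

def byt5_encode(text: str) -> list[int]:
    """Map text to ByT5-style token ids: UTF-8 bytes + 3, then </s>."""
    return [b + 3 for b in text.encode("utf-8")] + [1]

def byt5_decode(ids: list[int]) -> str:
    """Invert the encoding, dropping the reserved special-token ids."""
    return bytes(i - 3 for i in ids if i > 2).decode("utf-8")

# Undiacritized input vs. fully diacritized target ("he wrote"):
source = "كتب"      # 3 Arabic letters, 2 UTF-8 bytes each
target = "كَتَبَ"    # a fatha diacritic inserted after each letter

src_ids = byt5_encode(source)   # 6 byte ids + </s> = 7 ids
tgt_ids = byt5_encode(target)   # 12 byte ids + </s> = 13 ids
# Each diacritic mark adds 2 bytes, so the target is longer than
# the source; the finetuned model learns this insertion mapping.
print(len(src_ids), len(tgt_ids))  # 7 13
```

Because every character reduces to raw bytes, no Arabic-specific vocabulary or feature engineering is needed, which is consistent with the paper's claim of finetuning a pre-trained multilingual model as-is.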
Mar-25-2023