Tone prediction and orthographic conversion for Basaa
Nikitin, Ilya, O'Connor, Brian, Safonova, Anastasia
–arXiv.org Artificial Intelligence
In this paper, we present a seq2seq approach to transliterating missionary Basaa orthographies into the official orthography. Our model is pre-trained on Basaa missionary- and official-orthography corpora using BERT. Since Basaa is a low-resource language, we chose the mT5 model for our project. Before training, we pre-processed the corpora by eliminating one-to-one correspondences between spellings and by unifying graphemes variably written with one or two characters into a single-character form. Our best mT5 model achieved a CER of 12.6747 and a WER of 40.1012.
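The abstract mentions two concrete steps that can be sketched in code: unifying graphemes that are variably written with one or two characters into a single-character form, and scoring with character error rate (CER). The sketch below is a minimal, hypothetical illustration: the digraph mappings and function names are assumptions, not the paper's actual normalization table or evaluation code.

```python
# Hypothetical sketch of (1) digraph unification and (2) CER scoring.
# The mappings below are illustrative examples, not Basaa's real table.

DIGRAPH_MAP = {
    "ny": "ɲ",   # assumed example: palatal nasal written as a digraph
    "ng": "ŋ",   # assumed example: velar nasal written as a digraph
}

def unify_digraphs(text: str) -> str:
    """Replace two-character spellings with single-character forms."""
    for digraph, single in DIGRAPH_MAP.items():
        text = text.replace(digraph, single)
    return text

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two character sequences (one-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(hyp: str, ref: str) -> float:
    """Character error rate: edit distance over reference length."""
    return levenshtein(hyp, ref) / max(len(ref), 1)
```

Normalizing both orthographies to single-character graphemes before training keeps the seq2seq model's input and output alphabets aligned, so a CER computed on the normalized strings counts each grapheme error once rather than twice.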
Oct-13-2022