YAD: Leveraging T5 for Improved Automatic Diacritization of Yorùbá Text
Olawole, Akindele Michael, Alabi, Jesujoba O., Sakpere, Aderonke Busayo, Adelani, David I.
– arXiv.org Artificial Intelligence
In addition, we pre-train a text-to-text transformer (T5) model for Yorùbá and show that it outperforms several multilingually trained T5 models. Lastly, we show that more data and larger models yield better diacritization for Yorùbá.

Introduction

Yorùbá, a language spoken predominantly in West Africa, is renowned for its tonal nature, characterized by a heavy use of diacritics to signify tone variations. In Yorùbá and many other languages, diacritics play a crucial role in disambiguating word meanings and in word pronunciation, making accurate diacritization essential for effective communication and language processing tasks (Skiredj & Berrada, 2024). However, manual diacritization is time-consuming and requires specialized linguistic expertise, motivating the development of automatic diacritization systems. In recent years, significant progress has been made in natural language processing (NLP) techniques, leading to the exploration of various approaches to automating the diacritization process for languages that use diacritics (Náplava et al., 2018; Mubarak et al., 2019; Náplava et al., 2021; Stankevicius et al., 2022, inter alia), including Yorùbá (Orife, 2018; Orife et al., 2020).
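To make the text-to-text framing concrete: a common way to build training data for automatic diacritization is to take correctly diacritized text and strip its diacritics, yielding (undiacritized source, diacritized target) pairs for a sequence-to-sequence model such as T5. The sketch below is illustrative only, not the paper's actual pipeline; the function names and the example sentence are assumptions, and the Unicode-decomposition approach also removes Yorùbá's underdots (e.g. ẹ → e), which are part of the orthography being restored.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    # Decompose characters (NFD), drop combining marks (tone marks,
    # underdots), then recompose (NFC). Illustrative helper, not the
    # authors' preprocessing code.
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

def make_pair(diacritized: str) -> tuple:
    # Build a (source, target) training pair for a text-to-text
    # diacritization model: model input lacks diacritics, target has them.
    return strip_diacritics(diacritized), diacritized

src, tgt = make_pair("Yorùbá")
print(src, "->", tgt)
```

A T5-style model would then be fine-tuned to map `src` back to `tgt`, learning to restore tone marks from context.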
Dec-28-2024