SPRING Lab IITM's submission to Low Resource Indic Language Translation Shared Task
Hamees Sayed, Advait Joglekar, Srinivasan Umesh
arXiv.org Artificial Intelligence
We develop a robust translation model for four low-resource Indic languages: Khasi, Mizo, Manipuri, and Assamese. Our approach includes a comprehensive pipeline from data collection and preprocessing to training and evaluation, leveraging data from WMT task datasets, BPCC, PMIndia, and OpenLanguageData. To address the scarcity of bilingual data, we use back-translation techniques on monolingual datasets for Mizo and Khasi, significantly expanding our training corpus. We fine-tune the pre-trained NLLB 3.3B model for Assamese, Mizo, and Manipuri, achieving improved performance over the baseline. For Khasi, which is not supported by the NLLB model, we introduce special tokens and train the model on our Khasi corpus. Our training involves masked language modelling, followed by fine-tuning for English-to-Indic and Indic-to-English translations.
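The back-translation step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: `reverse_model` is a placeholder for a reverse-direction translation model (e.g. Mizo-to-English) that turns each monolingual target-side sentence into a synthetic source sentence, producing pseudo-parallel pairs for training.

```python
def back_translate(monolingual_sentences, reverse_model):
    """Create synthetic parallel pairs from monolingual target-side text.

    Each target-language sentence is translated back into the source
    language by `reverse_model` (a stand-in for any reverse-direction
    translation function), yielding (synthetic_source, real_target)
    pairs that can augment a scarce bilingual corpus.
    """
    return [(reverse_model(sentence), sentence) for sentence in monolingual_sentences]
```

The synthetic source side may be noisy, but because the target side is genuine human-written text, such pairs are generally useful for training the forward (source-to-target) direction.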
Nov-11-2024