Geographically-Informed Language Identification
Dunn, Jonathan, Edwards-Brown, Lane
This paper develops an approach to language identification in which the set of languages considered by the model depends on the geographic origin of the text in question. Given that many digital corpora can be geo-referenced at the country level, this paper formulates 16 region-specific models, each of which contains the languages expected to appear in countries within that region. These regional models also each include 31 widely-spoken international languages in order to ensure coverage of these linguae francae regardless of location. An upstream evaluation using traditional language identification testing data shows an improvement in f-score ranging from 1.7 points (Southeast Asia) to as much as 10.4 points (North Africa). A downstream evaluation on social media data shows that this improved performance has a significant impact on the language labels which are applied to large real-world corpora. The result is a highly accurate model that covers 916 languages at a sample size of 50 characters, with performance further improved by incorporating geographic information into the model.
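The core idea in the abstract above can be sketched in a few lines: restrict the candidate language set to the languages expected in the text's region plus a fixed set of international languages, then pick the best-scoring candidate. Everything below (the region tables, the toy script-based scorer) is illustrative, not the paper's actual model or data.

```python
# A minimal sketch (not the authors' implementation) of geographically-informed
# language identification: the base model's candidates are filtered to the
# languages expected in the text's region of origin, plus a shared set of
# widely spoken international languages. All data here is illustrative.

REGION_LANGUAGES = {
    "north_africa": {"ara", "ber", "fra"},
    "southeast_asia": {"tha", "vie", "ind", "khm"},
}
INTERNATIONAL = {"eng", "fra", "spa", "ara"}  # always-included linguae francae

def base_scores(text):
    """Stand-in for a full LID model: score every supported language."""
    scores = {"ara": 0.0, "ber": 0.0, "fra": 0.0, "tha": 0.0,
              "vie": 0.0, "ind": 0.0, "khm": 0.0, "eng": 0.0, "spa": 0.0}
    for ch in text:
        if "\u0e00" <= ch <= "\u0e7f":      # Thai block
            scores["tha"] += 1.0
        elif "\u0600" <= ch <= "\u06ff":    # Arabic block
            scores["ara"] += 1.0
        elif ch.isascii() and ch.isalpha():
            scores["eng"] += 0.1
    return scores

def identify(text, region):
    """Argmax over the region's expected languages plus international ones."""
    candidates = REGION_LANGUAGES[region] | INTERNATIONAL
    scores = base_scores(text)
    return max(candidates, key=lambda lang: scores.get(lang, 0.0))

print(identify("สวัสดีครับ", "southeast_asia"))  # tha
```

The gain reported in the paper comes from exactly this restriction: a regional model never has to distinguish, say, Thai from a visually similar language that does not occur in Southeast Asia, which shrinks the confusion space.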
LIMIT: Language Identification, Misidentification, and Translation using Hierarchical Models in 350+ Languages
Agarwal, Milind, Alam, Md Mahfuz Ibn, Anastasopoulos, Antonios
Knowing the language of an input text/audio is a necessary first step for using almost every NLP tool such as taggers, parsers, or translation systems. Language identification is a well-studied problem, sometimes even considered solved; in reality, due to lack of data and computational challenges, current systems cannot accurately identify most of the world's 7000 languages. To tackle this bottleneck, we first compile a corpus, MCS-350, of 50K multilingual and parallel children's stories in 350+ languages. MCS-350 can serve as a benchmark for language identification of short texts and for 1400+ new translation directions in low-resource Indian and African languages. Second, we propose a novel misprediction-resolution hierarchical model, LIMIT, for language identification that reduces error by 55% (from 0.71 to 0.32) on our compiled children's stories dataset and by 40% (from 0.23 to 0.14) on the FLORES-200 benchmark. Our method can expand language identification coverage into low-resource languages by relying solely on systemic misprediction patterns, bypassing the need to retrain large models from scratch.
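The misprediction-resolution idea can be sketched as a routing step: when the base identifier's label falls in a known confusion group, a specialist for that group makes the final call. The confusion groups and both toy classifiers below are illustrative assumptions, not the LIMIT system itself.

```python
# A minimal sketch (not the authors' LIMIT system) of hierarchical
# misprediction resolution: base predictions inside a known confusion
# group are re-routed to a per-group specialist classifier.

# Languages the (hypothetical) base model systematically confuses.
CONFUSION_GROUPS = [
    {"bos", "hrv", "srp"},   # closely related South Slavic varieties
    {"msa", "ind"},          # Malay vs. Indonesian
]

def resolve(base_label, text, specialists):
    """Hand off to a group specialist when the base label is in a confusion group."""
    for group in CONFUSION_GROUPS:
        if base_label in group:
            return specialists[frozenset(group)](text)
    return base_label  # label not known to be confusable: keep it

def msa_ind_specialist(text):
    """Toy specialist: an Indonesian-leaning function word tips the balance."""
    return "ind" if " yang " in f" {text} " else "msa"

specialists = {frozenset({"msa", "ind"}): msa_ind_specialist}
print(resolve("msa", "buku yang saya baca", specialists))  # ind
```

Because only the small specialists need to be trained, coverage can grow to new low-resource languages without retraining the base model, which is the bottleneck the abstract highlights.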
Overview of GUA-SPA at IberLEF 2023: Guarani-Spanish Code Switching Analysis
Chiruzzo, Luis, Agüero-Torales, Marvin, Giménez-Lugo, Gustavo, Alvarez, Aldo, Rodríguez, Yliana, Góngora, Santiago, Solorio, Thamar
We present the first shared task for detecting and analyzing code-switching in Guarani and Spanish, GUA-SPA at IberLEF 2023. The challenge consisted of three tasks: identifying the language of a token, NER, and a novel task of classifying the way a Spanish span is used in the code-switched context. We annotated a corpus of 1500 texts extracted from news articles and tweets, around 25 thousand tokens, with the information for the tasks. Three teams took part in the evaluation phase, obtaining generally good results for Task 1 and more mixed results for Tasks 2 and 3.
Findings of the VarDial Evaluation Campaign 2023
Aepli, Noëmi, Çöltekin, Çağrı, Van Der Goot, Rob, Jauhiainen, Tommi, Kazzaz, Mourhaf, Ljubešić, Nikola, North, Kai, Plank, Barbara, Scherrer, Yves, Zampieri, Marcos
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with EACL 2023. Three separate shared tasks were included this year: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages -- True Labels (DSL-TL), and Discriminating Between Similar Languages -- Speech (DSL-S). All three tasks were organized for the first time this year.
Two-stage Pipeline for Multilingual Dialect Detection
Dialect Identification is a crucial task for localizing various Large Language Models. This paper outlines our approach to the VarDial 2023 shared task. The task requires identifying three dialects from each of three languages for Track-1 (a 9-way classification) and two dialects from each of three languages for Track-2 (a 6-way classification). Our proposed approach consists of a two-stage system and outperforms other participants' systems and previous works in this domain. We achieve a score of 58.54% for Track-1 and 85.61% for Track-2. Our codebase is available publicly (https://github.com/ankit-vaidya19/EACL_VarDial2023).
Language Variety Identification with True Labels
Zampieri, Marcos, North, Kai, Jauhiainen, Tommi, Felice, Mariano, Kumari, Neha, Nair, Nishant, Bangera, Yash
Language identification is an important first step in many IR and NLP applications. Most publicly available language identification datasets, however, are compiled under the assumption that the gold label of each instance is determined by where texts are retrieved from. Research has shown that this is a problematic assumption, particularly in the case of very similar languages (e.g., Croatian and Serbian) and national language varieties (e.g., Brazilian and European Portuguese), where texts may contain no distinctive marker of the particular language or variety. To overcome this important limitation, this paper presents DSL True Labels (DSL-TL), the first human-annotated multilingual dataset for language variety identification. DSL-TL contains a total of 12,900 instances in Portuguese, split between European Portuguese and Brazilian Portuguese; Spanish, split between Argentine Spanish and Castilian Spanish; and English, split between American English and British English. We trained multiple models to discriminate between these language varieties, and we present the results in detail. The data and models presented in this paper provide a reliable benchmark toward the development of robust and fairer language variety identification systems. We make DSL-TL freely available to the research community.
Comparing Approaches to Dravidian Language Identification
Jauhiainen, Tommi, Ranasinghe, Tharindu, Zampieri, Marcos
This paper describes the submissions by team HWR to the Dravidian Language Identification (DLI) shared task organized at the VarDial 2021 workshop. The DLI training set includes 16,674 YouTube comments written in Roman script containing code-mixed text with English and one of the three South Dravidian languages: Kannada, Malayalam, and Tamil. We submitted results generated using two models: a Naive Bayes classifier with adaptive language models, which has been shown to obtain competitive performance in many language and dialect identification tasks, and a transformer-based model, which is widely regarded as the state-of-the-art in a number of NLP tasks. Our first submission was sent in the closed submission track using only the training set provided by the shared task organisers, whereas the second submission is considered to be open as it used a pretrained model trained with external data. Our team attained a shared second position in the shared task with the submission based on Naive Bayes. Our results reinforce the idea that deep learning methods are not as competitive in tasks related to language identification as they are in many other text classification tasks.
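The winning family of model here is classical: a Naive Bayes classifier over character n-grams. The sketch below is the generic textbook version of that idea, not the team's adaptive variant, and the two-language training data is illustrative.

```python
# A minimal character n-gram Naive Bayes language identifier: each language
# is a smoothed n-gram distribution, and prediction is the argmax of the
# summed log-probabilities of the input's n-grams.
from collections import Counter
import math

def ngrams(text, n=3):
    padded = f"  {text.lower()}  "   # pad so word edges form n-grams too
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

class NaiveBayesLID:
    def __init__(self, n=3):
        self.n = n
        self.counts = {}   # language -> Counter of n-grams
        self.totals = {}   # language -> total n-gram count

    def train(self, lang, texts):
        c = self.counts.setdefault(lang, Counter())
        for t in texts:
            c.update(ngrams(t, self.n))
        self.totals[lang] = sum(c.values())

    def predict(self, text):
        def log_prob(lang):
            c, total = self.counts[lang], self.totals[lang]
            vocab = len(c) + 1
            # Add-one smoothing over the language's n-gram distribution.
            return sum(math.log((c[g] + 1) / (total + vocab))
                       for g in ngrams(text, self.n))
        return max(self.counts, key=log_prob)

lid = NaiveBayesLID()
lid.train("eng", ["this is an english sentence", "language identification"])
lid.train("fin", ["tämä on suomenkielinen lause", "kielen tunnistaminen"])
print(lid.predict("english language"))  # eng
```

The "adaptive" refinement the team used updates the language models with confidently-classified test data during prediction; the static version above is the baseline it builds on.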