Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text
Frances A. Laureano De Leon, Harish Tayyar Madabushi, Mark Lee
– arXiv.org Artificial Intelligence
Code-switching is a prevalent linguistic phenomenon in which multilingual individuals seamlessly alternate between languages. Despite its widespread use online and growing research interest in the area, work on code-switching presents unique challenges, primarily stemming from the scarcity of labelled data and other resources. In this study, we investigate how pre-trained language models (PLMs) handle code-switched text along three dimensions: a) the ability of PLMs to detect code-switched text, b) variations in the structural information that PLMs utilise to capture code-switched text, and c) the consistency of semantic information representation in code-switched text. To conduct a systematic and controlled evaluation of the language models in question, we create a novel dataset of well-formed, naturalistic code-switched text along with parallel translations into the source languages. Our findings reveal that pre-trained language models are effective at generalising to code-switched text, shedding light on these models' ability to transfer their representations to code-switched corpora.
May-7-2024
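For readers who want a concrete picture of what probing frozen PLM representations for code-switching can look like, the sketch below trains a simple linear probe on mean-pooled multilingual BERT embeddings to separate code-switched from monolingual sentences. The model name, toy sentences, and choice of a logistic-regression probe are illustrative assumptions only; they do not reflect the authors' exact setup or dataset.

```python
# Illustrative sketch only: a minimal probing classifier for detecting
# code-switched text from frozen PLM representations. Model name, toy
# sentences, and probe choice are assumptions, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual PLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def sentence_embedding(text: str) -> list[float]:
    """Mean-pool the final-layer hidden states of the frozen PLM."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).tolist()

# Toy examples: label 1 = code-switched (Spanish-English), 0 = monolingual.
texts = [
    ("I went to the tienda to buy milk", 1),
    ("Vamos a la playa this weekend", 1),
    ("I went to the store to buy milk", 0),
    ("Vamos a la playa este fin de semana", 0),
]
X = [sentence_embedding(t) for t, _ in texts]
y = [label for _, label in texts]

# A linear probe on frozen embeddings keeps the test about the
# representations themselves rather than about fine-tuning capacity.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))
```

Keeping the encoder frozen and the probe linear is the standard way to ask whether the information is already present in the representations, which is the spirit of the evaluation described in the abstract.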