Emergent Word Order Universals from Cognitively-Motivated Language Models
Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin
The world's languages exhibit certain so-called typological or implicational universals; for example, Subject-Object-Verb (SOV) languages typically use postpositions. Explaining the source of such biases is a key goal of linguistics. We study word-order universals through a computational simulation with language models (LMs). Our experiments show that typologically-typical word orders tend to have lower perplexity as estimated by LMs with cognitively plausible biases: syntactic biases, specific parsing strategies, and memory limitations. This suggests that the interplay of cognitive biases and predictability (perplexity) can explain many aspects of word-order universals. It also showcases the advantage of cognitively-motivated LMs, typically employed in cognitive modeling, in the simulation of language universals.
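As a rough illustration of the perplexity comparison described in the abstract, the sketch below scores word-order variants of a sentence with an off-the-shelf causal LM via the Hugging Face transformers library. This is a stand-in, not the authors' setup: the paper trains cognitively-motivated LMs on artificial languages with controlled word orders, whereas the gpt2 checkpoint and the toy English scrambles here are placeholder assumptions.

```python
# Minimal sketch: compare word-order variants by LM perplexity.
# Assumption: `gpt2` and the toy sentences below are placeholders;
# the paper instead uses cognitively-motivated LMs (syntactic biases,
# parsing strategies, memory limitations) trained on artificial languages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    """Perplexity = exp(mean negative log-likelihood per token)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy over the (shifted) next-token predictions.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Hypothetical variants of one proposition; the paper's claim is that
# typologically-typical combinations (e.g. SOV with postpositions)
# receive lower perplexity under cognitively-biased LMs.
variants = {
    "SOV + postposition": "the dog the park in the ball found",
    "SOV + preposition": "the dog in the park the ball found",
}
for label, sent in variants.items():
    print(f"{label}: perplexity = {perplexity(sent):.2f}")
```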
arXiv.org Artificial Intelligence
June 7, 2024