Exploring syntactic information in sentence embeddings through multilingual subject-verb agreement
Vivi Nastase, Chunyang Jiang, Giuseppe Samo, Paola Merlo
arXiv.org Artificial Intelligence
In this paper, we investigate to what degree multilingual pretrained language models capture cross-linguistically valid abstract linguistic representations. Our approach is to develop large-scale curated synthetic data with specific properties and to use it to study sentence representations built with pretrained language models. We use a new multiple-choice task and accompanying datasets, Blackbird Language Matrices (BLMs), to focus on a specific structural grammatical phenomenon -- subject-verb agreement across a variety of sentence structures -- in several languages. Solving this task requires a system that detects complex linguistic patterns and paradigms in text representations. Using a two-level architecture that solves the problem in two steps -- detecting syntactic objects and their properties in individual sentences, then finding patterns across an input sequence of sentences -- we show that, despite being trained on multilingual texts in a consistent manner, multilingual pretrained language models exhibit language-specific differences, and syntactic structure is not shared, even across closely related languages.
Sep-10-2024
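The two-level architecture described in the abstract can be illustrated with a minimal sketch: a first level compresses each sentence embedding into a small syntactic-feature vector, and a second level scores each candidate answer by how well it continues the pattern formed by the context sequence. Everything below (the random projection, the cosine-similarity scorer, the dimensions) is a hypothetical stand-in for the paper's learned components, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_sentence(embedding, projection):
    """Level 1 (sketch): project a sentence embedding to a small
    feature vector standing in for detected syntactic properties."""
    return np.tanh(projection @ embedding)

def score_candidates(context_feats, candidate_feats):
    """Level 2 (sketch): score each candidate by cosine similarity to
    the mean context pattern, a stand-in for a learned sequence model."""
    pattern = context_feats.mean(axis=0)
    pattern = pattern / np.linalg.norm(pattern)
    return [float(c @ pattern / np.linalg.norm(c)) for c in candidate_feats]

# Toy dimensions: 768-d sentence embeddings compressed to 16-d features.
proj = rng.normal(size=(16, 768)) / np.sqrt(768)
context = np.stack([encode_sentence(rng.normal(size=768), proj) for _ in range(7)])
candidates = np.stack([encode_sentence(rng.normal(size=768), proj) for _ in range(4)])
scores = score_candidates(context, candidates)
best = int(np.argmax(scores))  # index of the predicted answer
```

In a BLM instance, the seven context sentences share a grammatical pattern and one of the candidates continues it; here random vectors merely exercise the pipeline's shape.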