Exploring Representational Disparities Between Multilingual and Bilingual Translation Models
Neha Verma, Kenton Murray, Kevin Duh
arXiv.org Artificial Intelligence
Multilingual machine translation has proven immensely useful for low-resource and zero-shot language pairs. However, language pairs in multilingual models sometimes see worse performance than in bilingual models, especially when translating in a one-to-many setting. To understand why, we examine the geometric differences between the representations from bilingual models and those from one-to-many multilingual models. Specifically, we evaluate the isotropy of the representations to measure how well they utilize the dimensions of their underlying vector space. Using the same evaluation data for both kinds of models, we find that multilingual decoder representations tend to be less isotropic than bilingual decoder representations. Additionally, we show that much of the anisotropy in multilingual decoder representations can be attributed to modeling language-specific information, thereby limiting the remaining representational capacity.
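The abstract does not quote the paper's exact isotropy metric, but a common anisotropy proxy, which this sketch assumes, is the expected cosine similarity between random pairs of representation vectors: near 0 for isotropic representations, and near 1 when the vectors crowd into a narrow cone of the space.

```python
import numpy as np

def avg_cosine_similarity(X: np.ndarray, n_pairs: int = 10_000, seed: int = 0) -> float:
    """Estimate anisotropy as the mean cosine similarity between
    randomly sampled pairs of rows of X (one vector per row)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    return float(np.mean(np.sum(Xn[i] * Xn[j], axis=1)))

# Isotropic case: independent Gaussian vectors point in all directions,
# so the expected pairwise cosine similarity is close to 0.
iso = avg_cosine_similarity(np.random.default_rng(1).normal(size=(2000, 512)))

# Anisotropic case: a large shared offset pushes every vector into a
# narrow cone, driving the average cosine similarity toward 1.
aniso = avg_cosine_similarity(np.random.default_rng(1).normal(size=(2000, 512)) + 10.0)
```

Applied to decoder hidden states collected on shared evaluation data, a higher average similarity for the multilingual model than for the bilingual one would indicate less isotropic use of the representation space.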
May 23, 2023