Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
Ri, Narutatsu, Lee, Fei-Tzin, Verma, Nakul
–arXiv.org Artificial Intelligence
While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the mechanism by which such geometric structures arise remains obscure. We find that an elementary contrastive-style method applied to distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings.
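The "parallel lines" property the abstract refers to is the familiar parallelogram structure of analogies: the offset between "king" and "man" pointing in (nearly) the same direction as the offset between "queen" and "woman". The sketch below is my own toy illustration, not the paper's method: it trains embeddings on a handful of hypothetical co-occurrence-derived positive pairs with a simple contrastive-style objective (pull co-occurring words together, push random negatives apart), then measures the cosine similarity between the two analogy offsets. All data, dimensions, and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman"]
idx = {w: i for i, w in enumerate(vocab)}

# Hypothetical positive pairs, standing in for pairs extracted
# from real co-occurrence statistics (assumed data).
positives = [("king", "man"), ("queen", "woman"),
             ("king", "queen"), ("man", "woman")]

dim = 8
E = rng.normal(scale=0.1, size=(len(vocab), dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.1
for _ in range(500):
    for a, b in positives:
        i, j = idx[a], idx[b]
        # Contrastive "pull": increase similarity of a co-occurring pair.
        g = 1.0 - sigmoid(E[i] @ E[j])
        E[i] += lr * g * E[j]
        E[j] += lr * g * E[i]
        # Contrastive "push": decrease similarity with a random negative.
        k = int(rng.integers(len(vocab)))
        if k not in (i, j):
            g = sigmoid(E[i] @ E[k])
            E[i] -= lr * g * E[k]
            E[k] -= lr * g * E[i]

# Parallelism check: compare the offsets king-man and queen-woman.
u = E[idx["king"]] - E[idx["man"]]
v = E[idx["queen"]] - E[idx["woman"]]
cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
print(f"cosine(king - man, queen - woman) = {cos:.3f}")
```

On a vocabulary this small the result is only suggestive; the paper's point is that with genuine co-occurrence statistics, a contrastive loss alone is enough to make such offsets align.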
Jun-13-2023