Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing
Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank
–arXiv.org Artificial Intelligence
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture its core semantic content through concepts connected by various types of semantic relations. Methods typically leverage large silver training data to learn a single model that is able to project non-English sentences to AMRs. However, we find that a simple baseline tends to be overlooked: translating the sentences to English and projecting their AMR with a monolingual AMR parser (translate+parse, T+P). In this paper, we revisit this simple two-step baseline, and enhance it with a strong NMT system and a strong AMR parser. Our experiments show that T+P outperforms a recent state-of-the-art system across all tested languages: German, Italian, Spanish and Mandarin, with +14.6, +12.6, +14.3 and +16.0 Smatch points, respectively.
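The two-step baseline described above can be sketched as a simple composition of a translation step and a monolingual parsing step. This is a minimal illustration, not the authors' implementation: `translate` and `parse_amr` are hypothetical stand-ins for the strong NMT system and the monolingual English AMR parser the paper plugs in, and the toy lookup tables below exist only to show the data flow.

```python
# Hedged sketch of the translate+parse (T+P) baseline: non-English
# sentences are first machine-translated into English, then projected
# onto AMR graphs by a monolingual English AMR parser.
from typing import Callable, List

def translate_then_parse(
    sentences: List[str],
    translate: Callable[[List[str]], List[str]],
    parse_amr: Callable[[List[str]], List[str]],
) -> List[str]:
    """Project non-English sentences onto AMRs via English."""
    english = translate(sentences)   # step 1: NMT into English
    return parse_amr(english)        # step 2: monolingual AMR parsing

# Toy stand-ins to demonstrate the composition (not real models).
def toy_translate(sents: List[str]) -> List[str]:
    lookup = {"Der Junge will schlafen": "The boy wants to sleep"}
    return [lookup.get(s, s) for s in sents]

def toy_parse(sents: List[str]) -> List[str]:
    lookup = {
        "The boy wants to sleep":
            "(w / want-01 :ARG0 (b / boy) :ARG1 (s / sleep-01 :ARG0 b))"
    }
    return [lookup.get(s, "(a / amr-unknown)") for s in sents]

amrs = translate_then_parse(
    ["Der Junge will schlafen"], toy_translate, toy_parse
)
print(amrs[0])
```

In practice the two stand-ins would be replaced by off-the-shelf components (an NMT model and an English AMR parser), which is exactly what makes this baseline easy to set up and, per the reported results, hard to beat.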
Jun-8-2021