MEXMA: Token-level objectives improve sentence representations

João Maria Janeiro, Benjamin Piwowarski, Patrick Gallinari, Loïc Barrault


Creating general-purpose multilingual embeddings has attracted significant attention from the research community in recent years, driven by the growing need for efficient and effective cross-lingual representations. Cross-Lingual Sentence Encoders (CLSEs) produce fixed-size sentence representations that capture the relevant information in a sentence and are aligned across languages. Because these aligned representations place all languages in a shared multilingual space, sentences can be compared and retrieved efficiently using simple distance measures, which makes them effective in a wide range of downstream applications. Current CLSEs (Duquenne et al., 2023; Feng et al., 2022) typically build on pre-trained encoders, often language models (Conneau et al., 2020; Devlin et al., 2019) or translation models (NLLB Team et al., 2022). These pre-trained encoders are trained with objectives that operate on individual words or tokens, i.e., token-level objectives.
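To make the retrieval use case concrete, here is a minimal sketch, not the authors' implementation, of distance-based cross-lingual retrieval over fixed-size sentence embeddings. The `encode` function and the toy sentences are hypothetical stand-ins for a real CLSE; a trained encoder would map translations to nearby points in the shared space.

```python
import numpy as np

# Hypothetical stand-in for a real cross-lingual sentence encoder (CLSE).
# A real model would map each sentence to a fixed-size vector such that
# translations of the same sentence land close together in a shared space;
# this toy version just returns a deterministic random unit vector.
def encode(sentence: str, dim: int = 4) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)  # unit-normalize so dot product = cosine

# Toy data: English queries and French candidate sentences.
queries = ["The cat sleeps.", "It is raining."]
candidates = ["Il pleut.", "Le chat dort."]

Q = np.stack([encode(s) for s in queries])     # (num_queries, dim)
C = np.stack([encode(s) for s in candidates])  # (num_candidates, dim)

# On unit vectors, cosine similarity reduces to a matrix product, which is
# what makes fixed-size aligned embeddings cheap to compare and retrieve.
sims = Q @ C.T                                 # (num_queries, num_candidates)
best = sims.argmax(axis=1)

# With this toy encoder the matches are arbitrary; a trained CLSE would
# retrieve the correct translation for each query.
for q, idx in zip(queries, best):
    print(f"{q!r} -> {candidates[idx]!r}")
```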