Conan-Embedding-v2: Training an LLM from Scratch for Text Embeddings
Shiyu Li, Yang Tang, Ruijie Liu, Shi-Zhe Chen, Xi Chen
–arXiv.org Artificial Intelligence
Large language models (LLMs) have recently demonstrated excellent performance on text embedding tasks. Previous work usually uses LoRA to fine-tune existing LLMs, an approach limited by the data and training gaps between LLMs and embedding models. In this work, we introduce Conan-embedding-v2, a new 1.4B-parameter LLM trained from scratch and fine-tuned as a text embedder. First, we add news data and multilingual pairs to LLM pretraining to bridge the data gap. Building on this, we propose a cross-lingual retrieval dataset that enables the LLM to better align embeddings across different languages. Second, whereas LLMs use a causal mask with token-level loss, embedding models use a bidirectional mask with sentence-level loss. This training gap makes full fine-tuning less effective than LoRA. We introduce a soft-masking mechanism that gradually transitions between these two types of masks, enabling the model to learn more comprehensive representations. On top of this, we propose a dynamic hard negative mining method that exposes the model to progressively more difficult negative examples throughout training. Intuitive and effective, Conan-embedding-v2 achieves SOTA performance with only approximately 1.4B parameters on both the Massive Text Embedding Benchmark (MTEB) and Chinese MTEB (as of May 19, 2025).
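The soft-masking idea can be illustrated with a small sketch. The snippet below is a minimal PyTorch illustration, assuming the transition is implemented as an additive attention bias whose future-position penalty is scaled down over training; the function names, the linear schedule, and the -1e9 constant are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def soft_attention_mask(seq_len: int, alpha: float, neg: float = -1e9) -> torch.Tensor:
    """Additive attention bias interpolating between two mask types.

    alpha = 0.0 -> fully causal (token-level LLM pretraining)
    alpha = 1.0 -> fully bidirectional (sentence-level embedding training)
    """
    # Standard causal bias: large negative values above the diagonal.
    causal_bias = torch.triu(torch.full((seq_len, seq_len), neg), diagonal=1)
    # Scale the future-position penalty down as alpha ramps toward 1.
    return (1.0 - alpha) * causal_bias

def mask_alpha(step: int, transition_steps: int) -> float:
    """Linear schedule: ramp alpha from 0 to 1 over `transition_steps`."""
    return min(1.0, step / transition_steps)
```

Dynamic hard negative mining can be sketched in a similar spirit: periodically re-score a candidate pool with the current model and keep the most similar non-positive passages as negatives. The `model.encode` interface, the pool layout, and the refresh interval below are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def refresh_hard_negatives(model, query_texts, candidate_texts, top_k=8):
    """Re-score a candidate pool with the current model and keep the
    top_k most similar candidates per query as hard negatives.

    `model.encode` is assumed to return sentence embeddings; positives
    should already be excluded from the candidate pool.
    """
    q = F.normalize(model.encode(query_texts), dim=-1)      # (Q, d)
    c = F.normalize(model.encode(candidate_texts), dim=-1)  # (C, d)
    sims = q @ c.T                                           # cosine similarities
    return sims.topk(top_k, dim=-1).indices                  # (Q, top_k) pool indices

# During fine-tuning, call refresh_hard_negatives every few hundred steps
# so the negatives keep pace with the improving encoder.
```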
Sep-17-2025