Trillion 7B Technical Report
Sungjun Han, Juyoung Suk, Suyeong An, Hyungguk Kim, Kyuseok Kim, Wonsuk Yang, Seungtaek Choi, Jamin Shin
arXiv.org Artificial Intelligence
We introduce Trillion-7B, the most token-efficient Korean-centric multilingual LLM available. Our novel Cross-lingual Document Attention (XLDA) mechanism enables highly efficient and effective knowledge transfer from English to target languages like Korean and Japanese. Combined with optimized data mixtures, language-specific filtering, and tailored tokenizer construction, Trillion-7B achieves competitive performance while dedicating only 10% of its 2T training tokens to multilingual data and requiring just 59.4K H100 GPU hours ($148K) for full training. Comprehensive evaluations across 27 benchmarks in four languages demonstrate Trillion-7B's robust multilingual performance and exceptional cross-lingual consistency.
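The abstract does not spell out how XLDA is implemented, but one plausible reading is that when parallel documents (e.g., an English document and its Korean counterpart) are packed into the same training sequence, the usual document-level causal mask is relaxed so target-language tokens may also attend to the paired English document, while unrelated documents in the pack stay isolated. The sketch below illustrates that idea only; the function name, the `pair_ids` encoding, and the masking scheme are assumptions for illustration, not the paper's actual implementation.

```python
import torch

def xlda_attention_mask(doc_ids: torch.Tensor, pair_ids: torch.Tensor) -> torch.Tensor:
    """Build a boolean attention mask for one packed training sequence.

    doc_ids:  (seq_len,) integer id of the document each token belongs to.
    pair_ids: (seq_len,) id of the paired parallel document, or -1 if none.
    Returns a (seq_len, seq_len) mask where True means "may attend".
    """
    seq_len = doc_ids.shape[0]
    # Standard causal constraint: each token attends only to earlier positions.
    causal = torch.ones(seq_len, seq_len).tril().bool()
    # Within-document attention, as in ordinary document-masked packing.
    same_doc = doc_ids[:, None] == doc_ids[None, :]
    # Cross-lingual opening: a target-language token may also attend to its
    # paired document (pair_ids of -1 never match a valid document id).
    cross_pair = pair_ids[:, None] == doc_ids[None, :]
    return causal & (same_doc | cross_pair)

# Example: tokens 0-2 form an English doc (id 0), tokens 3-5 its Korean
# translation (id 1, paired with doc 0), tokens 6-7 an unrelated doc (id 2).
doc_ids  = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2])
pair_ids = torch.tensor([-1, -1, -1, 0, 0, 0, -1, -1])
mask = xlda_attention_mask(doc_ids, pair_ids)
```

Under this scheme the Korean tokens see both their own document and the English source, whereas the unrelated document remains fully isolated, which would match the abstract's claim of cheap English-to-target knowledge transfer without cross-contamination between unrelated packed documents.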
Apr-23-2025