EXECUTE: A Multilingual Benchmark for LLM Token Understanding
Edman, Lukas, Schmid, Helmut, Fraser, Alexander
arXiv.org Artificial Intelligence
The CUTE benchmark showed that LLMs struggle with character-level understanding in English. We extend it to more languages with diverse scripts and writing systems, introducing EXECUTE. Our simplified framework allows easy expansion to any language. Tests across multiple LLMs reveal that the challenges in other languages do not always lie at the character level as in English: some languages show word-level processing issues, while others show no issues at all. We also examine sub-character tasks in Chinese, Japanese, and Korean to assess LLMs' understanding of character components.
May-26-2025