Analyzing Cognitive Plausibility of Subword Tokenization
Subword tokenization has become the de facto standard for tokenization, yet comparative evaluations of subword vocabulary quality across languages are scarce. Existing evaluation studies focus on the effect of a tokenization algorithm on downstream task performance, or on engineering criteria such as the compression rate. We present a new evaluation paradigm that focuses on the cognitive plausibility of subword tokenization. We analyze the correlation between tokenizer output and human response times and accuracy on a lexical decision task. We compare three tokenization algorithms across several languages and vocabulary sizes. Our results indicate that the UnigramLM algorithm yields less cognitively plausible tokenization behavior and worse coverage of derivational morphemes, in contrast with prior work.
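The evaluation paradigm described in the abstract can be illustrated with a minimal sketch: tokenize the stimuli from a lexical decision task and correlate the number of subword pieces per word with the observed response times. This is an illustrative reconstruction, not the paper's code; the tokenizer checkpoint and the response-time values below are hypothetical placeholders, and different tokenizer checkpoints would stand in for the three algorithms being compared.

```python
# Minimal sketch of the evaluation idea (illustrative only): correlate how
# many subword pieces a tokenizer assigns to each word with human response
# times from a lexical decision task.
from scipy.stats import spearmanr
from transformers import AutoTokenizer

# Hypothetical stimuli with made-up response times (ms); a real experiment
# would use a published lexical decision dataset.
stimuli = {
    "cat": 510.0,
    "dog": 505.0,
    "unhappiness": 642.0,
    "rebuildable": 688.0,
}

# Any subword tokenizer can be plugged in here; bert-base-uncased uses WordPiece.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Number of subword pieces per word: a crude proxy for how "word-like"
# the tokenizer treats each stimulus.
n_pieces = [len(tokenizer.tokenize(word)) for word in stimuli]
rts = list(stimuli.values())

# A positive correlation would mean heavily split words take longer for
# humans to recognize, i.e., the segmentation aligns with human behavior.
rho, p_value = spearmanr(n_pieces, rts)
print(f"Spearman correlation between split count and RT: {rho:.2f} (p={p_value:.3f})")
```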
arXiv.org Artificial Intelligence
Oct-20-2023
- Country:
- Asia > Middle East
- Israel (0.14)
- Europe > Germany (0.28)
- North America > United States
- Maryland (0.14)
- Genre:
- Research Report
- Experimental Study (0.68)
- New Finding (0.88)