Long-Tail Crisis in Nearest Neighbor Language Models
Yuto Nishida, Makoto Morishita, Hiroyuki Deguchi, Hidetaka Kamigaito, Taro Watanabe
–arXiv.org Artificial Intelligence
The $k$-nearest-neighbor language model ($k$NN-LM), a retrieval-augmented language model, improves perplexity on a given text by directly accessing, at inference time, a large datastore built from arbitrary text data. A widely held hypothesis for the success of $k$NN-LM is that its explicit memory, i.e., the datastore, enhances predictions for long-tail phenomena. However, prior work has primarily shown its ability to retrieve long-tail contexts, leaving underexplored how well the model estimates the probabilities of long-tail target tokens during inference. In this paper, we investigate the behavior of $k$NN-LM on low-frequency tokens, examining prediction probability, retrieval accuracy, token distribution in the datastore, and the approximation error of product quantization. Our experimental results reveal that $k$NN-LM does not improve prediction performance for low-frequency tokens but mainly benefits high-frequency tokens, regardless of the long-tail contexts in the datastore.
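For readers unfamiliar with the mechanism the abstract refers to, below is a minimal sketch of the standard $k$NN-LM interpolation (as in the original Khandelwal et al. formulation, not this paper's experimental setup): the retrieved neighbors' distances define a softmax distribution whose mass is aggregated per target token, then mixed with the base LM distribution via a coefficient `lam`. All names and the toy inputs are illustrative assumptions.

```python
import numpy as np

def knn_lm_prob(p_lm, distances, neighbor_tokens, vocab_size,
                lam=0.25, temperature=1.0):
    """Interpolate the base LM distribution with a kNN distribution.

    p_lm: base LM probabilities over the vocabulary (shape [vocab_size]).
    distances: squared L2 distances of the k retrieved datastore entries.
    neighbor_tokens: target-token id stored with each retrieved entry.
    lam: interpolation weight on the kNN distribution (hypothetical default).
    """
    # Softmax over negative distances: closer neighbors get more weight.
    weights = np.exp(-np.asarray(distances, dtype=float) / temperature)
    weights /= weights.sum()

    # Aggregate neighbor weights by their target token to form p_kNN.
    p_knn = np.zeros(vocab_size)
    for tok, w in zip(neighbor_tokens, weights):
        p_knn[tok] += w

    # Final distribution: lam * p_kNN + (1 - lam) * p_LM.
    return lam * p_knn + (1.0 - lam) * p_lm
```

In this sketch, a low-frequency target token only gains probability if entries carrying that token are actually retrieved and close in the key space, which is exactly the pathway the paper's analysis probes.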
Mar-28-2025