Retrieval Quality at Context Limit
arXiv.org Artificial Intelligence
Abstract: The ability of large language models (LLMs) to recall and retrieve information from long contexts is critical for many real-world applications. Prior work (Liu et al., 2023) reported that LLMs suffer significant drops in retrieval accuracy for facts placed in the middle of long contexts, an effect known as "Lost in the Middle" (LITM). We find that Gemini 2.5 Flash answers needle-in-a-haystack questions with high accuracy regardless of document position, including when the document sits near the input context limit. Our results suggest that the "Lost in the Middle" effect is absent for simple factoid Q&A in Gemini 2.5 Flash, indicating substantial improvements in long-context retrieval.

Large language models (LLMs) have rapidly advanced in their ability to process and reason over long textual contexts, enabling applications in summarization, retrieval-augmented Q&A, document understanding, and more.
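The needle-in-a-haystack setup described above can be sketched as follows: a known fact (the "needle") is inserted at a chosen fractional depth of a long filler context, and the model is then asked to retrieve it. This is a minimal illustration of the probe construction only; the `build_haystack` helper, the filler text, and the commented-out model call are assumptions, not the authors' actual harness.

```python
def build_haystack(filler_sentences, needle, depth_fraction):
    """Insert the needle sentence at a fractional depth of the context."""
    idx = int(len(filler_sentences) * depth_fraction)
    return " ".join(filler_sentences[:idx] + [needle] + filler_sentences[idx:])

# Hypothetical filler and needle; a real run would use natural documents
# long enough to approach the model's input context limit.
filler = ["The sky was clear over the harbor that morning."] * 1000
needle = "The secret code for the archive is 7421."

for depth in (0.0, 0.5, 1.0):  # start, middle, and end of the context
    context = build_haystack(filler, needle, depth)
    # Retrieval accuracy would then be measured per depth, e.g.:
    # answer = ask(model="gemini-2.5-flash",  # hypothetical API helper
    #              prompt=context + "\nWhat is the secret code for the archive?")
```

Sweeping `depth_fraction` over the full range is what distinguishes a LITM test from a plain recall test: the middle positions are exactly where prior work reported accuracy drops.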
Nov-11-2025