Make Your LLM Fully Utilize the Context
–Neural Information Processing Systems
While many contemporary large language models (LLMs) can process lengthy input, they still struggle to fully utilize information within the long context, known as the lost-in-the-middle challenge. We hypothesize that it stems from insufficient explicit supervision during long-context training, which fails to emphasize that any position in a long context can hold crucial information. Based on this intuition, our study presents information-intensive (IN2) training.
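The core idea of information-intensive training is to construct long-context examples in which the answer-bearing segment can appear at any position, so the model receives explicit supervision for every position. A minimal sketch of such data construction (the function name, dictionary fields, and structure are our own illustration, not the paper's released code):

```python
import random


def make_in2_example(key_segment, filler_segments, rng=None):
    """Build one long-context training example in the spirit of IN2 training.

    The answer-bearing segment is inserted at a uniformly random position
    among filler segments, so crucial information is equally likely to sit
    anywhere in the context. Illustrative sketch only.
    """
    rng = rng or random.Random()
    segments = list(filler_segments)
    pos = rng.randint(0, len(segments))  # any position is equally likely
    segments.insert(pos, key_segment)
    return {"context": "\n".join(segments), "key_position": pos}


example = make_in2_example(
    "The vault code is 7319.",  # hypothetical key fact for illustration
    [f"Filler paragraph {i}." for i in range(10)],
)
```

Training pairs would then ask a question answerable only from the key segment, forcing the model to attend to whichever position it landed in.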