LLMs Encode How Difficult Problems Are
Lugoloobi, William, Russell, Chris
Large language models exhibit a puzzling inconsistency: they solve complex problems yet frequently fail on seemingly simpler ones. We investigate whether LLMs internally encode problem difficulty in a way that aligns with human judgment, and whether this representation tracks generalization during reinforcement learning post-training. We train linear probes across layers and token positions on 60 models, evaluating on mathematical and coding subsets of Easy2HardBench. We find that human-labeled difficulty is strongly linearly decodable (AMC: $\rho \approx 0.88$) and exhibits clear model-size scaling, whereas LLM-derived difficulty is substantially weaker and scales poorly. Steering along the difficulty direction reveals that pushing models toward "easier" representations reduces hallucination and improves accuracy. During GRPO training on Qwen2.5-Math-1.5B, the human-difficulty probe strengthens and positively correlates with test accuracy across training steps, while the LLM-difficulty probe degrades and negatively correlates with performance. These results suggest that human annotations provide a stable difficulty signal that RL amplifies, while automated difficulty estimates derived from model performance become misaligned precisely as models improve. We release probe code and evaluation scripts to facilitate replication.
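The probing setup the abstract describes can be sketched in a few lines: fit a linear (ridge) regression from hidden-state activations to difficulty labels, then score held-out predictions with Spearman rank correlation. This is a minimal illustration, not the paper's code; the synthetic activations with a planted linear "difficulty" direction are an assumption standing in for real residual-stream features at a chosen layer and token position.

```python
import numpy as np

# Synthetic stand-in for hidden states: real probes would use activations
# extracted at a fixed layer/token position; here we plant a linear
# difficulty direction so the probe has something to recover.
rng = np.random.default_rng(0)
n, d = 500, 64
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)  # difficulty labels + noise

X_tr, X_te = X[:400], X[400:]
y_tr, y_te = y[:400], y[400:]

# Ridge-regression probe in closed form: w = (X'X + lam*I)^{-1} X'y
lam = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
pred = X_te @ w

def spearman(a, b):
    # Rank correlation via double argsort (no ties in continuous data).
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(pred, y_te)
print(f"held-out Spearman rho: {rho:.2f}")
```

With a genuinely linear signal the probe recovers difficulty almost perfectly; the paper's finding is that human-labeled difficulty behaves much more like this planted case than LLM-derived difficulty does.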
Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization
Ding, Mucong, Deng, Chenghao, Choo, Jocelyn, Wu, Zichu, Agrawal, Aakriti, Schwarzschild, Avi, Zhou, Tianyi, Goldstein, Tom, Langford, John, Anandkumar, Anima, Huang, Furong
While generalization over tasks from easy to hard is crucial to profiling large language models (LLMs), datasets with fine-grained difficulty annotations for each problem across a broad range of complexity remain scarce. To address this limitation, we present Easy2Hard-Bench, a consistently formatted collection of 6 benchmark datasets spanning domains such as mathematics and programming problems, chess puzzles, and reasoning questions. Each problem in these datasets is annotated with a numerical difficulty score. To estimate problem difficulties systematically, we collect abundant performance data on each problem, drawn from real-world human attempts or from LLM results on prominent leaderboards. Leveraging this rich performance data, we apply well-established difficulty ranking systems, such as Item Response Theory (IRT) and Glicko-2, to assign numerical difficulty scores uniformly across problems. Moreover, the datasets in Easy2Hard-Bench distinguish themselves from previous collections by a higher proportion of challenging problems. Through extensive experiments with six state-of-the-art LLMs, we provide a comprehensive analysis of their performance and generalization capabilities across varying levels of difficulty, with the aim of inspiring future research in LLM generalization. The datasets are available at https://huggingface.co/datasets/furonghuang-lab/Easy2Hard-Bench.
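The IRT-style difficulty estimation described above can be illustrated with a toy 1PL (Rasch) fit: given a binary solver-by-problem response matrix, jointly estimate solver abilities and item difficulties by gradient ascent on the Rasch log-likelihood. This is a hedged sketch with simulated responses, not the benchmark's pipeline (which uses real human/LLM attempt records and more robust estimators such as Glicko-2).

```python
import numpy as np

# Simulate responses[i, j] = 1 if solver i answered problem j correctly,
# from planted abilities and difficulties (an assumption for illustration).
rng = np.random.default_rng(0)
n_solvers, n_items = 200, 30
ability = rng.normal(size=n_solvers)
difficulty_true = np.linspace(-2, 2, n_items)
p = 1 / (1 + np.exp(-(ability[:, None] - difficulty_true[None, :])))
responses = (rng.random((n_solvers, n_items)) < p).astype(float)

# Joint maximum-likelihood fit: the Rasch gradient for each parameter is
# the sum of residuals (observed - predicted) over its row or column.
theta = np.zeros(n_solvers)   # estimated abilities
b = np.zeros(n_items)         # estimated difficulties
lr = 0.05
for _ in range(500):
    pred = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resid = responses - pred
    theta += lr * resid.sum(axis=1) / n_items
    b -= lr * resid.sum(axis=0) / n_solvers
    b -= b.mean()  # pin the location of the scale (identifiability)

# Harder planted items should receive larger estimated difficulty.
corr = np.corrcoef(b, difficulty_true)[0, 1]
print(f"correlation with planted difficulty: {corr:.2f}")
```

The mean-centering step matters: the Rasch model is invariant to shifting all abilities and difficulties by a constant, so one location constraint is needed for the estimates to be identified.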