Can LLMs Estimate Cognitive Complexity of Reading Comprehension Items?
Seonjeong Hwang, Hyounghun Kim, Gary Geunbae Lee
arXiv.org Artificial Intelligence
Estimating the cognitive complexity of reading comprehension (RC) items is crucial for assessing item difficulty before an item is administered to learners. Unlike syntactic and semantic features, such as passage length or semantic similarity between options, cognitive features that arise during answer reasoning are not readily extractable with existing NLP tools and have traditionally relied on human annotation. In this study, we examine whether large language models (LLMs) can estimate the cognitive complexity of RC items by focusing on two dimensions, Evidence Scope and Transformation Level, that indicate the degree of cognitive burden involved in reasoning about the answer. Our experimental results demonstrate that LLMs can approximate the cognitive complexity of items, indicating their potential as tools for prior difficulty analysis. Further analysis reveals a gap between LLMs' reasoning ability and their metacognitive awareness: even when they produce correct answers, they sometimes fail to correctly identify the features underlying their own reasoning process.
Oct 30, 2025