Cognitive Complexity


Can LLMs Estimate Cognitive Complexity of Reading Comprehension Items?

Hwang, Seonjeong, Kim, Hyounghun, Lee, Gary Geunbae

arXiv.org Artificial Intelligence

Estimating the cognitive complexity of reading comprehension (RC) items is crucial for assessing item difficulty before the items are administered to learners. Unlike syntactic and semantic features, such as passage length or semantic similarity between options, cognitive features that arise during answer reasoning are not readily extractable with existing NLP tools and have traditionally relied on human annotation. In this study, we examine whether large language models (LLMs) can estimate the cognitive complexity of RC items by focusing on two dimensions, Evidence Scope and Transformation Level, which indicate the degree of cognitive burden involved in reasoning toward the answer. Our experimental results demonstrate that LLMs can approximate the cognitive complexity of items, indicating their potential as tools for prior difficulty analysis. Further analysis reveals a gap between LLMs' reasoning ability and their metacognitive awareness: even when they produce correct answers, they sometimes fail to correctly identify the features underlying their own reasoning process.
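As a concrete illustration of this setup, here is a minimal sketch of prompting a chat LLM to label an RC item along the two dimensions. The prompt wording, label sets, and model name are assumptions for demonstration only, not the authors' protocol.

```python
# Illustrative sketch: asking an LLM to rate the two cognitive-complexity
# dimensions of a reading-comprehension item. Label sets and prompt wording
# are assumptions, not the paper's annotation scheme.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You will see a reading-comprehension item (passage, question, options).
Rate two dimensions of its cognitive complexity:
1. Evidence Scope: sentence / multi-sentence / whole-passage
2. Transformation Level: verbatim / paraphrase / inference
Answer as: EvidenceScope=<label>; TransformationLevel=<label>

Passage: {passage}
Question: {question}
Options: {options}
"""

def estimate_complexity(passage: str, question: str, options: list[str]) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": PROMPT.format(
            passage=passage, question=question, options=" / ".join(options))}],
        temperature=0,
    )
    return response.choices[0].message.content
```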


YouLeQD: Decoding the Cognitive Complexity of Questions and Engagement in Online Educational Videos from Learners' Perspectives

Ming, Nong, Sharma, Sachin, Noh, Jiho

arXiv.org Artificial Intelligence

Questioning is a fundamental aspect of education, as it helps assess students' understanding, promotes critical thinking, and encourages active engagement. With the rise of artificial intelligence in education, there is a growing interest in developing intelligent systems that can automatically generate and answer questions and facilitate interactions in both virtual and in-person education settings. However, to develop effective AI models for education, it is essential to have a fundamental understanding of questioning. In this study, we created the YouTube Learners' Questions on Bloom's Taxonomy Dataset (YouLeQD), which contains learner-posed questions from YouTube lecture video comments. Along with the dataset, we developed two RoBERTa-based classification models leveraging Large Language Models to detect questions and analyze their cognitive complexity using Bloom's Taxonomy. This dataset and our findings provide valuable insights into the cognitive complexity of learner-posed questions in educational videos and their relationship with interaction metrics. This can aid in the development of more effective AI models for education and improve the overall learning experience for students.
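To make the modeling setup concrete, below is a minimal sketch of a RoBERTa-based classifier over Bloom's six cognitive levels using Hugging Face Transformers. The checkpoint, label order, and example question are illustrative assumptions; the released YouLeQD models would use their own fine-tuned weights.

```python
# Minimal sketch of a RoBERTa classifier for Bloom's Taxonomy levels.
# "roberta-base" here has a randomly initialized head; in practice one
# would load a checkpoint fine-tuned on labeled learner questions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(BLOOM_LEVELS)
)

def classify_question(question: str) -> str:
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return BLOOM_LEVELS[int(logits.argmax(dim=-1))]

print(classify_question("Why does the algorithm converge faster with momentum?"))
```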


Understanding the Cognitive Complexity in Language Elicited by Product Images

Chen, Yan-Ying, Hakimi, Shabnam, Van, Monica, Chen, Francine, Hong, Matthew, Klenk, Matt, Wu, Charlene

arXiv.org Artificial Intelligence

Product images (e.g., a phone) can be used to elicit a diverse set of consumer-reported features expressed through language, including surface-level perceptual attributes (e.g., "white") and more complex ones, like perceived utility (e.g., "battery"). The cognitive complexity of elicited language reveals the nature of cognitive processes and the context required to understand them; cognitive complexity also predicts consumers' subsequent choices. This work offers an approach for measuring and validating the cognitive complexity of human language elicited by product images, providing a tool for understanding the cognitive processes of human respondents as well as of virtual respondents simulated by Large Language Models (LLMs). We also introduce a large dataset that includes diverse descriptive labels for product images, including human-rated complexity. We demonstrate that human-rated cognitive complexity can be approximated using a set of natural language models that, combined, roughly capture the complexity construct. Moreover, this approach is minimally supervised and scalable, even in use cases with limited human assessment of complexity.
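As a rough illustration of the minimally supervised approach the abstract describes, the sketch below regresses simple language features (sentence embeddings plus a surface-length feature) onto human complexity ratings. The feature set, model choice, and toy data are assumptions for demonstration, not the paper's validated pipeline.

```python
# Illustrative sketch: approximating human-rated cognitive complexity of
# consumer-written product descriptions with a regularized linear model.
# Features and ratings below are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def featurize(labels: list[str]) -> np.ndarray:
    embeddings = encoder.encode(labels)                     # semantic features
    lengths = np.array([[len(t.split())] for t in labels])  # surface feature
    return np.hstack([embeddings, lengths])

# labels: consumer descriptions; ratings: hypothetical human complexity scores
labels = ["white", "battery lasts all day", "feels premium in hand"]
ratings = np.array([1.0, 2.5, 3.0])

model = Ridge().fit(featurize(labels), ratings)
print(model.predict(featurize(["sleek design"])))
```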


The Natural Roots of Artificial Intelligence

#artificialintelligence

To begin our exploration of AI, we start by defining intelligence. The Oxford Universal Dictionary (1955) leans heavily on the word's Latin root, intelligere (to understand), defining intelligence as [1] the faculty of understanding; intellect; and [2] understanding as a quality admitting of degree; spec. While this focus on "understanding" does highlight the capacity to perceive meaning, it is too broad; we need to look further for clarity. Within academic circles, researchers do not share a universal definition of intelligence. Broadly speaking, there are four major viewpoints on intelligence that carry over to artificial intelligence research, each with proponents and critics.