Question Generation
Question Asking as Program Generation
Anselm Rothe, Brenden M. Lake, Todd Gureckis
A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.
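The core idea above — a question is a program that, executed on a world state, returns an answer, and good questions trade off informativeness against program complexity — can be sketched in miniature. The toy domain, the lambda "programs", and the use of expected information gain as the informativeness score are illustrative assumptions, not the paper's actual grammar or model:

```python
import math

# Hypothetical mini-sketch of "questions as programs": a question is a
# function that maps a possible world state to an answer, and the asker
# prefers questions with high expected information gain (EIG) relative
# to their length (a proxy for program complexity).

def entropy(dist):
    """Shannon entropy (in bits) of a {outcome: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_information_gain(question, worlds):
    """EIG = prior entropy minus expected posterior entropy over worlds.

    `worlds` is a {world: prior probability} dict; `question` maps a
    world to an answer.
    """
    prior = entropy(worlds)
    # Group worlds by the answer the question program would return.
    by_answer = {}
    for w, p in worlds.items():
        by_answer.setdefault(question(w), {})[w] = p
    posterior = 0.0
    for answer_worlds in by_answer.values():
        p_answer = sum(answer_worlds.values())
        conditional = {w: p / p_answer for w, p in answer_worlds.items()}
        posterior += p_answer * entropy(conditional)
    return prior - posterior

# Toy domain: the hidden world is one of three board layouts.
worlds = {"A": 0.5, "B": 0.25, "C": 0.25}
is_a = lambda w: w == "A"   # yes/no question: "is the layout A?"
which = lambda w: w         # fully revealing question

print(expected_information_gain(is_a, worlds))   # 1.0 bit
print(expected_information_gain(which, worlds))  # 1.5 bits
```

A full model would also penalize longer programs, so that among equally informative questions the concise one is preferred.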
EduAgentQG: A Multi-Agent Workflow Framework for Personalized Question Generation
Jia, Rui, Zhang, Min, Liu, Fengrui, Jiang, Bo, Kuang, Kun, Dai, Zhongxiang
Abstract--High-quality personalized question banks are crucial for supporting adaptive learning and individualized assessment. Manually designing questions is time-consuming and often fails to meet diverse learning needs, making automated question generation a crucial approach to reduce teachers' workload and improve the scalability of educational resources. However, most existing question generation methods rely on single-agent or rule-based pipelines, which still produce questions with unstable quality, limited diversity, and insufficient alignment with educational goals. To address these challenges, we propose EduAgentQG, a multi-agent collaborative framework for generating high-quality and diverse personalized questions. The framework consists of five specialized agents and operates through an iterative feedback loop: the Planner generates structured design plans and multiple question directions to enhance diversity; the Writer produces candidate questions based on the plan and optimizes their quality and diversity using feedback from the Solver and Educator; the Solver and Educator perform binary scoring across multiple evaluation dimensions and feed the evaluation results back to the Writer; and the Checker conducts final verification, including answer correctness and clarity, ensuring alignment with educational goals. Through this multi-agent collaboration and iterative feedback loop, EduAgentQG generates questions that are both high-quality and diverse while maintaining consistency with educational objectives. Experiments on two mathematics question datasets demonstrate that EduAgentQG outperforms existing single-agent and multi-agent methods in terms of question diversity, goal consistency, and overall quality.

High-quality personalized question banks are crucial for supporting adaptive learning and individualized assessment [1], [2], [3].
In practical teaching, experienced educators can often determine the specific educational goals a student needs to achieve based on observation and prior knowledge [4], [5], [6]. Teachers typically engage in iterative cycles of planning, drafting, validation, and optimization to design questions that are both diagnostically effective and pedagogically meaningful, balancing knowledge coverage, cognitive skill development, and difficulty levels [7], [8]. Existing question banks may not always contain suitable questions, and even when relevant questions are available, they may have been previously attempted by students [9], [10], [11].
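The five-agent feedback loop the abstract describes can be sketched schematically. The role names (Planner, Writer, Solver, Educator, Checker) and the binary scoring come from the paper; the function signatures, the scoring dimensions, the pass-all acceptance rule, and the round limit are illustrative assumptions, with each agent callable standing in for an LLM call:

```python
# Schematic sketch of the EduAgentQG-style five-agent feedback loop.
# Dimension names and control flow details are assumptions.

DIMENSIONS = ["goal_alignment", "difficulty_fit", "clarity"]  # assumed

def run_agent_loop(goal, planner, writer, solver, educator, checker,
                   max_rounds=3):
    plan = planner(goal)                     # structured design plan
    question = writer(plan, feedback=None)   # first candidate question
    for _ in range(max_rounds):
        # Solver and Educator each return binary scores per dimension;
        # a dimension passes only if both agents accept it.
        scores = {d: min(solver(question)[d], educator(question)[d])
                  for d in DIMENSIONS}
        if all(scores.values()):
            break                            # every dimension passed
        question = writer(plan, feedback=scores)  # revise using feedback
    # Checker verifies answer correctness and clarity before release.
    return question if checker(question) else None
```

In practice each callable would wrap a prompted language model, and the binary scores would be parsed from its structured output.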
One-Topic-Doesn't-Fit-All: Transcreating Reading Comprehension Test for Personalized Learning
Han, Jieun, Lee, Daniel, Yoo, Haneul, Yoon, Jinsung, Park, Junyeong, Kim, Suin, Ahn, So-Yeon, Oh, Alice
Personalized learning has gained attention in English as a Foreign Language (EFL) education, where engagement and motivation play crucial roles in reading comprehension. We propose a novel approach to generating personalized English reading comprehension tests tailored to students' interests. We develop a structured content transcreation pipeline using OpenAI's gpt-4o, where we start with the RACE-C dataset, and generate new passages and multiple-choice reading comprehension questions that are linguistically similar to the original passages but semantically aligned with individual learners' interests. Our methodology integrates topic extraction, question classification based on Bloom's taxonomy, linguistic feature analysis, and content transcreation to enhance student engagement. We conduct a controlled experiment with EFL learners in South Korea to examine the impact of interest-aligned reading materials on comprehension and motivation. Our results show students learning with personalized reading passages demonstrate improved comprehension and motivation retention compared to those learning with non-personalized materials.
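The pipeline stages named in the abstract (linguistic feature analysis, then interest-aligned transcreation of passage and questions) can be sketched as follows. The stage names follow the abstract; the specific features, the prompts, and the `llm` callable (standing in for a gpt-4o API call) are assumptions for illustration:

```python
# Illustrative sketch of a content-transcreation pipeline: profile the
# original passage's linguistic features, then ask an LLM to rewrite
# passage and questions around a learner's interest while preserving
# those features. Prompts and the feature set are assumptions.

def linguistic_profile(passage):
    """Crude linguistic features to hold constant across transcreation."""
    words = passage.split()
    sentences = [s for s in passage.split(".") if s.strip()]
    return {
        "n_words": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
    }

def transcreate(passage, questions, interest, llm):
    profile = linguistic_profile(passage)
    new_passage = llm(
        f"Rewrite this passage about the topic '{interest}', keeping "
        f"roughly {profile['n_words']} words and an average sentence "
        f"length of {profile['avg_sentence_len']:.0f} words:\n{passage}"
    )
    new_questions = [
        llm(f"Rewrite this question for the new passage, preserving its "
            f"Bloom's-taxonomy level:\n{q}\nPassage:\n{new_passage}")
        for q in questions
    ]
    return new_passage, new_questions
```

A production version would replace the crude surface features with proper readability and syntactic-complexity measures, and validate the rewritten questions against the new passage.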
Multi-Agent Collaborative Framework For Math Problem Generation
Karbasi, Kia, Hong, Kevin, Samadi, Mohammad Amin, Pottie, Gregory
Automatic question generation (AQG) for mathematics education remains an elusive goal for Intelligent Tutoring Systems and educators. While pre-trained transformer-based language models have significantly advanced natural language generation, they often struggle to precisely control problem complexity and cognitive demands. In this paper, we introduce a collaborative multi-agent framework as a novel method of incorporating inference-time computation into AQG. This approach leverages multiple agents that iteratively refine generated question-answer pairs to better balance complexity and cognitive demand. We evaluate the generated questions on five meta-evaluation criteria (relevance, importance, clarity, difficulty matching, and answerability) to assess the system's ability to control the required complexity and quality of the questions. Preliminary evaluations show that this collaborative multi-agent framework elevates the quality of generated educational content by fostering a more nuanced balance between cognitive challenge and clarity. These promising outcomes suggest that integrating collaborative multi-agent workflows can yield more controlled, pedagogically valuable content that can help advance automated educational content generation and adaptive learning environments.
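The inference-time refinement idea — agents repeatedly revise a question-answer pair until it satisfies the five meta-evaluation criteria — can be sketched as a simple accept-or-revise loop. The criterion names come from the abstract; the 1-5 scoring scale, the threshold, and the agent callables are illustrative assumptions:

```python
# Minimal sketch of criterion-driven iterative refinement: evaluate a
# question-answer pair on the five criteria, and if any fall short,
# ask the generator to revise while targeting the weak ones.

CRITERIA = ["relevance", "importance", "clarity",
            "difficulty_matching", "answerability"]

def refine(qa_pair, generator, evaluator, threshold=4, max_iters=5):
    """Revise `qa_pair` until every criterion scores >= threshold."""
    for _ in range(max_iters):
        scores = evaluator(qa_pair)          # assumed {criterion: 1-5}
        weak = [c for c in CRITERIA if scores[c] < threshold]
        if not weak:
            return qa_pair, scores           # all criteria satisfied
        qa_pair = generator(qa_pair, weak)   # revise, targeting weaknesses
    return qa_pair, evaluator(qa_pair)
```

Spending extra inference-time computation this way trades latency for finer control over complexity and cognitive demand than a single generation pass allows.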
Difficulty-Controllable Multiple-Choice Question Generation Using Large Language Models and Direct Preference Optimization
Difficulty-controllable question generation for reading comprehension has gained significant attention in the field of education as a fundamental tool for adaptive learning support. Although several neural question generation methods have recently succeeded in controlling difficulty, conventional approaches still face two major limitations. First, they cannot directly generate multiple-choice questions, which are the most widely used question type in educational contexts. Second, they are not explicitly trained to optimize the accuracy of difficulty control, leaving room for further improvement in difficulty controllability. To address these limitations, this study proposes a novel difficulty-controllable multiple-choice question generation method for reading comprehension which leverages a large language model trained using a direct preference optimization technique to improve the accuracy of difficulty control.
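The direct preference optimization (DPO) objective the abstract refers to can be written down for a single preference pair. The standard DPO loss below is well established; what is assumed here is only the framing of the pair — a question that hits the target difficulty treated as preferred over one that misses it — and the example numbers:

```python
import math

# DPO loss for one preference pair. `w` is the preferred question
# (assumed: the one matching the target difficulty), `l` the
# dispreferred one; inputs are sequence log-probabilities under the
# trainable policy and a frozen reference model.

def dpo_loss(logp_w_policy, logp_l_policy,
             logp_w_ref, logp_l_ref, beta=0.1):
    margin = beta * ((logp_w_policy - logp_w_ref)
                     - (logp_l_policy - logp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid

# When the policy already favors the difficulty-matched question more
# strongly than the reference does, the loss falls below log(2):
print(dpo_loss(-10.0, -14.0, -12.0, -12.0))
```

Minimizing this loss pushes the policy to raise the likelihood of difficulty-matched questions relative to the reference model, which is what gives the method its improved difficulty controllability.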