A Chain-of-Thought Prompting Approach with LLMs for Evaluating Students' Formative Assessment Responses in Science
Clayton Cohn, Nicole Hutchins, Tuan Le, Gautam Biswas
arXiv.org Artificial Intelligence
This paper explores the use of large language models (LLMs) to score and explain short-answer assessments in K-12 science. While existing methods can score more structured math and computer science assessments, they often do not provide explanations for the scores. Our study focuses on employing GPT-4 for automated assessment in middle school Earth Science, combining few-shot and active learning with chain-of-thought reasoning. Using a human-in-the-loop approach, we successfully score and provide meaningful explanations for formative assessment responses. A systematic analysis of our method's pros and cons sheds light on the potential for human-in-the-loop techniques to enhance automated grading for open-ended science assessments.
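Below is a minimal sketch of the kind of few-shot chain-of-thought scoring pipeline the abstract describes, using the OpenAI Python client. The rubric, exemplar, and prompt wording here are illustrative assumptions, not the authors' actual prompts, and the active-learning and human-in-the-loop components are omitted.

```python
# Sketch only: few-shot chain-of-thought scoring of a student response
# with GPT-4. The rubric and exemplar below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are grading middle school Earth Science short answers. "
    "Score each response from 0 to 2 against the rubric, and explain "
    "your reasoning step by step before giving the final score."
)

# One worked example (few-shot) showing the reasoning-then-score format.
FEW_SHOT_EXAMPLES = [
    {
        "role": "user",
        "content": "Question: Why does warm air rise?\n"
                   "Response: Because warm air is less dense than cold air.",
    },
    {
        "role": "assistant",
        "content": "Reasoning: The response correctly links temperature to "
                   "density and density to rising motion, covering the key "
                   "rubric idea.\nScore: 2",
    },
]

def score_response(question: str, student_response: str) -> str:
    """Ask GPT-4 to score one formative assessment response with an explanation."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT_EXAMPLES
        + [{"role": "user",
            "content": f"Question: {question}\nResponse: {student_response}"}]
    )
    completion = client.chat.completions.create(model="gpt-4", messages=messages)
    return completion.choices[0].message.content

print(score_response(
    "Why does warm air rise?",
    "Hot air goes up because it wants to.",
))
```

In a human-in-the-loop setup like the one the paper studies, a teacher would review outputs of this kind and the reviewed examples could be folded back into the few-shot pool.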
Mar-21-2024