Automated Grading of Students' Handwritten Graphs: A Comparison of Meta-Learning and Vision-Large Language Models

Parsaeifard, Behnam, Hlosta, Martin, Bergamin, Per

arXiv.org Artificial Intelligence 

With the rise of online learning, the demand for efficient and consistent assessment in mathematics has significantly increased over the past decade. Machine Learning (ML), particularly Natural Language Processing (NLP), has been widely used for autograding student responses, especially those involving text and/or mathematical expressions. However, there has been limited research on autograding responses involving students' handwritten graphs, despite their prevalence in Science, Technology, Engineering, and Mathematics (STEM) curricula. In this study, we implement multimodal meta-learning models for autograding images containing students' handwritten graphs and text. We further compare the performance of Vision Large Language Models (VLLMs) with these specially trained meta-learning models. Our results, evaluated on a real-world dataset collected from our institution, show that the best-performing meta-learning models outperform VLLMs in 2-way classification tasks. In contrast, in more complex 3-way classification tasks, the best-performing VLLMs slightly outperform the meta-learning models. While VLLMs show promising results, their reliability and practical applicability remain uncertain and require further investigation.

As online education has gained popularity, the need for efficient and scalable methods of automatically grading and assessing student work has become increasingly important. Automated grading offers several advantages, including scalability, time efficiency, grading consistency, and immediate feedback. Early research on automated grading primarily focused on closed-ended questions, such as multiple-choice and fill-in-the-blank questions, where responses could be easily verified using rule-based systems [1], [2].
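To make the N-way classification setting concrete, the following is a minimal sketch of a prototypical-network-style episode, one common meta-learning approach (the paper does not specify its exact architecture, so the embedding dimension, episode layout, and function names here are illustrative assumptions): class prototypes are computed as mean embeddings over a labeled support set, and each query embedding is assigned to its nearest prototype.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_way):
    # One prototype per class: the mean of that class's support embeddings.
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_way)
    ])

def classify(query_embeddings, protos):
    # Squared Euclidean distance to each prototype; nearest prototype wins.
    d = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way episode with 2-D embeddings standing in for image features:
# class 0 clustered near the origin, class 1 near (5, 5).
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
labels = np.array([0] * 5 + [1] * 5)
protos = prototypes(support, labels, n_way=2)
query = np.array([[0.2, -0.1], [4.8, 5.1]])
print(classify(query, protos))  # → [0 1]
```

In the graded-response setting, the 2-way case corresponds to correct/incorrect labels and the 3-way case to a finer rubric (e.g., an intermediate partial-credit class); the same nearest-prototype rule applies with `n_way=3`.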
