From Handwriting to Feedback: Evaluating VLMs and LLMs for AI-Powered Assessment in Indonesian Classrooms
Aisyah, Nurul, Kautsar, Muhammad Dehan Al, Hidayat, Arif, Chowdhury, Raqib, Koto, Fajri
arXiv.org Artificial Intelligence
Despite rapid progress in vision-language models (VLMs) and large language models (LLMs), their effectiveness for AI-driven educational assessment in real-world, underrepresented classrooms remains largely unexplored. We evaluate state-of-the-art VLMs and LLMs on over 14K handwritten answers from grade-4 classrooms in Indonesia, covering Mathematics and English aligned with the national curriculum. Unlike prior work on clean digital text, our dataset features naturally curly, diverse handwriting from real classrooms, posing realistic visual and linguistic challenges. Assessment tasks include grading and generating personalized Indonesian feedback guided by rubric-based evaluation. Results show that VLMs struggle with handwriting recognition, and their errors propagate into LLM grading; nevertheless, LLM-generated feedback remains pedagogically useful despite imperfect visual inputs, while revealing limits in personalization and contextual relevance.
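The abstract describes a two-stage pipeline: a VLM transcribes the handwritten answer, and an LLM then grades the transcription against a rubric and produces Indonesian feedback, so transcription errors propagate downstream. A minimal sketch of that flow, with all function names, the toy keyword-matching rubric, and the stub model calls being illustrative assumptions rather than the authors' implementation:

```python
# Hypothetical sketch of the two-stage assessment pipeline: VLM transcription,
# then rubric-based LLM grading and Indonesian feedback. All names and the
# keyword-matching grader are illustrative stand-ins, not the paper's code.
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str  # e.g. "correct final answer"
    keyword: str    # token the toy grader looks for in the transcription
    points: int

def transcribe_answer(image_bytes: bytes) -> str:
    """Stand-in for a VLM OCR call; a real system would query a vision model."""
    # Simulated transcription of a grade-4 maths answer. In practice this step
    # is noisy, and its errors propagate into grading.
    return "12 + 7 = 19"

def grade(transcription: str, rubric: list[RubricItem]) -> int:
    """Toy rubric-based grader: award points when the keyword appears."""
    return sum(item.points for item in rubric if item.keyword in transcription)

def feedback(score: int, max_score: int) -> str:
    """Stand-in for LLM feedback generation (Indonesian, as in the paper)."""
    if score == max_score:
        return "Bagus sekali! Jawabanmu benar."   # "Excellent! Your answer is correct."
    return "Coba periksa kembali perhitunganmu."  # "Please re-check your calculation."

rubric = [RubricItem("correct final answer", "19", 2)]
text = transcribe_answer(b"...")  # any OCR error here would lower the score
score = grade(text, rubric)
print(score, feedback(score, 2))
```

The sketch makes the error-propagation point concrete: if `transcribe_answer` misreads "19" as "14", the grader scores zero even though the pupil's work may be correct.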
Oct-10-2025