Improve LLM-based Automatic Essay Scoring with Linguistic Features
Zhaoyi Joey Hou, Alejandro Ciuba, Xiang Lorraine Li
Automatic Essay Scoring (AES) assigns scores to student essays, reducing the grading workload for instructors. Building a scoring system that handles essays across diverse prompts is challenging because of the flexible and diverse nature of the writing task. Existing methods typically fall into two categories: supervised feature-based approaches and large language model (LLM)-based methods. Supervised feature-based approaches often achieve higher performance but require resource-intensive training; LLM-based methods are computationally efficient at inference time but tend to perform worse. This paper combines the two by incorporating linguistic features into LLM-based scoring. Experimental results show that the hybrid method outperforms baseline models on both in-domain and out-of-domain writing prompts.
arXiv.org Artificial Intelligence
Feb-13-2025
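The abstract does not specify which linguistic features are used or how they are presented to the model, so the following minimal Python sketch only illustrates the general shape of the hybrid approach: compute a few surface-level linguistic features and embed them in the scoring prompt given to an LLM. The feature set, score range, and prompt template here are illustrative assumptions, not the paper's actual design.

```python
import re


def linguistic_features(essay: str) -> dict:
    """Compute a few surface-level linguistic features.

    This feature set (length, sentence length, lexical diversity) is an
    illustrative assumption; the paper's actual features may differ.
    """
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    n_sentences = max(len(sentences), 1)
    return {
        "word_count": n_words,
        "avg_sentence_length": round(n_words / n_sentences, 2),
        "type_token_ratio": round(len({w.lower() for w in words}) / max(n_words, 1), 3),
    }


def build_scoring_prompt(writing_prompt: str, essay: str) -> str:
    """Embed the computed features in the instruction passed to an LLM.

    The 1-6 score range and prompt wording are hypothetical placeholders.
    """
    feats = linguistic_features(essay)
    feature_lines = "\n".join(f"- {name}: {value}" for name, value in feats.items())
    return (
        "You are an essay rater. Score the essay from 1 to 6.\n"
        f"Writing prompt: {writing_prompt}\n"
        f"Linguistic features of the essay:\n{feature_lines}\n"
        f"Essay:\n{essay}\n"
        "Score:"
    )


if __name__ == "__main__":
    sample = "Computers help students learn. They also help teachers grade faster."
    print(build_scoring_prompt("Do computers benefit society?", sample))
```

The resulting prompt string would then be sent to whatever LLM performs the scoring; feeding the model explicit feature values is one simple way to combine supervised-style features with prompt-based scoring, as the abstract describes.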