A judge in Northern California dealt a blow this week to a controversial campaign to make teachers more accountable for their students' level of achievement, the second key setback in recent months for those behind the effort. The ruling by Contra Costa County Superior Court Judge Barry Goode went against the Bay Area group Students Matter. The group's lawsuit aimed to force 13 school districts, including seven in Southern California, to make student standardized test scores a key part of teacher evaluations. Students Matter had hoped to build on a 2012 ruling against the Los Angeles Unified School District, which led to a settlement under which test scores were supposed to become part of teacher evaluations. But in Doe vs. Antioch, the case decided this week, the judge concluded that districts had broad discretion over how to use test results.
In this article, we describe a deployed educational technology application: the Criterion Online Essay Evaluation Service, a web-based system that provides automated scoring and evaluation of student essays. Criterion has two complementary applications: (1) Critique Writing Analysis Tools, a suite of programs that detect errors in grammar, usage, and mechanics, that identify discourse elements in the essay, and that recognize potentially undesirable elements of style, and (2) e-rater version 2.0, an automated essay scoring system. Critique and e-rater give students feedback specific to their own writing to help them improve their writing skills; the system is intended to be used under the instruction of a classroom teacher. All of these capabilities outperform baseline algorithms, and some of the tools agree with human judges in their evaluations as often as two judges agree with each other.
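A scoring engine of this kind is often described as a regression over text features. As a minimal sketch only: e-rater's actual features and trained weights are proprietary, so the feature names, coefficients, and score range below are hypothetical stand-ins.

```python
# Illustrative sketch of feature-based essay scoring. The features and
# weights here are invented for illustration; they are NOT e-rater's.

def extract_features(essay: str) -> dict:
    """Compute a few surface features of the kind scoring engines use."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "length": len(words),  # essay length in words
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # lexical diversity: unique words over total words
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

# Hypothetical regression weights mapping features to a 1-6 holistic score.
WEIGHTS = {"length": 0.004, "avg_sentence_len": 0.05, "type_token_ratio": 2.0}
INTERCEPT = 1.0

def score(essay: str) -> float:
    feats = extract_features(essay)
    raw = INTERCEPT + sum(WEIGHTS[k] * v for k, v in feats.items())
    return min(max(raw, 1.0), 6.0)  # clamp to the 1-6 rubric range
```

In practice such weights would be fit to a corpus of human-scored essays, which is what lets the automated scores be compared against human raters for agreement.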
Writing a good essay typically involves students revising an initial paper draft after receiving feedback. We present eRevise, a web-based writing and revising environment that uses natural language processing features generated for rubric-based essay scoring to trigger formative feedback messages about students' use of evidence in response-to-text writing. By helping students understand the criteria for using text evidence during writing, eRevise empowers students to better revise their paper drafts. In a pilot deployment of eRevise in 7 classrooms spanning grades 5 and 6, the quality of text evidence usage in writing improved after students received formative feedback and then engaged in paper revision.
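The core mechanism, scoring features that also trigger feedback messages, can be sketched briefly. This is a hypothetical illustration of feature-thresholded feedback, not eRevise's actual features, thresholds, or messages.

```python
# Hypothetical sketch of rubric-feature-triggered formative feedback.
# Feature names, the threshold, and the messages are invented for illustration.

def evidence_features(essay: str, source_keywords: set) -> dict:
    """Count how many words from the source text appear in the draft."""
    words = {w.strip(".,!?").lower() for w in essay.split()}
    return {"n_source_words": len(words & source_keywords)}

def select_feedback(features: dict) -> str:
    """Map a feature value to a formative feedback message."""
    if features["n_source_words"] < 3:
        return "Try adding more evidence from the article to support your ideas."
    return "Good use of text evidence: now explain how it supports your claim."

# Usage: score a draft against keywords drawn from the source article.
keywords = {"glaciers", "temperature", "carbon", "sea"}
draft = "I think the earth is warming because temperature records show it."
message = select_feedback(evidence_features(draft, keywords))
```

The design point is that the same features computed for rubric-based scoring do double duty: rather than only producing a number, they determine which feedback message a student sees before revising.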
Roscoe, Rod D. (Arizona State University) | Crossley, Scott A. (Georgia State University) | Snow, Erica L. (Arizona State University) | Varner, Laura K. (Arizona State University) | McNamara, Danielle S. (Arizona State University)
Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may be misaligned with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider whether and how the scoring algorithms within an intelligent writing tutor correlate with measures of writing proficiency and students' general knowledge, reading comprehension, and vocabulary skill. Results indicate that the computational algorithms, although less attuned to knowledge and comprehension factors than human raters, were marginally related to such variables. Implications for improving automated scoring and intelligent tutoring of writing are briefly discussed.
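The validity check described here comes down to correlating automated scores with external measures. As a self-contained sketch (the data values below are made up for illustration), the Pearson correlation can be computed directly:

```python
# Illustrative: Pearson correlation between automated essay scores and an
# external measure such as a knowledge test. All numbers are invented.
import math

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

auto_scores = [3.1, 4.0, 2.5, 5.2, 3.8]  # hypothetical tutor scores
knowledge = [55, 70, 48, 81, 66]         # hypothetical knowledge-test scores
r = pearson(auto_scores, knowledge)
```

A weak but nonzero r would match the paper's finding that the algorithms are "marginally related" to knowledge and comprehension measures, while human raters track them more closely.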