Peer reviewed
ERIC Number: EJ995507
Record Type: Journal
Publication Date: 2013-Jan
Pages: 9
Abstractor: As Provided
ISBN: N/A
ISSN: 1075-2935
EISSN: N/A
Large-Scale Assessment, Locally-Developed Measures, and Automated Scoring of Essays: Fishing for Red Herrings?
Condon, William
Assessing Writing, v18 n1 p100-108 Jan 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the Criterion® Online Writing Evaluation Service as products for scoring writing tests, and most of the responses have been negative. While the criticisms leveled at AES are reasonable, the more important, underlying issues concern the limited aspects of the writing construct that AES-scored tests can actually measure. Because these tests underrepresent the construct as it is understood by the writing community, such tests should not be used in writing assessment, whether for admissions, placement, formative, or achievement testing. Instead of continuing the traditional, large-scale, commercial testing enterprise associated with AES, we should look to well-established, institutionally contextualized forms of assessment as models that yield fuller, richer information about the student's control of the writing construct. Such tests would be more valid, just as reliable, and far fairer to the test-takers, whose stakes are often quite high. (Contains 1 figure.)
Elsevier. 6277 Sea Harbor Drive, Orlando, FL 32887-4800. Tel: 877-839-7126; Tel: 407-345-4020; Fax: 407-363-1354; e-mail: usjcs@elsevier.com; Web site: http://www.elsevier.com
Publication Type: Journal Articles; Reports - Evaluative
Education Level: Elementary Secondary Education; Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A