Peer reviewed
ERIC Number: EJ890804
Record Type: Journal
Publication Date: 2010
Pages: 12
Abstractor: As Provided
ISBN: N/A
ISSN: 1075-2935
EISSN: N/A
Can Machine Scoring Deal with Broad and Open Writing Tests as Well as Human Readers?
McCurry, Doug
Assessing Writing, v15 n2 p118-129 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires specific and constrained tasks to produce results that mimic human judgements. The conclusion of a National Assessment of Educational Progress (NAEP) report on the online assessment of writing that "the automated scoring of essay responses did not agree with the scores awarded by human readers" is discussed. The article presents the results of a trial in which two software programmes for scoring writing test responses were compared with the results of the human scoring of a broad and open writing test. The trial showed that "automated essay scoring" (AES) did not grade the broad and open writing task responses as reliably as human markers. (Contains 6 tables.)
Elsevier. 6277 Sea Harbor Drive, Orlando, FL 32887-4800. Tel: 877-839-7126; Tel: 407-345-4020; Fax: 407-363-1354; e-mail: usjcs@elsevier.com; Web site: http://www.elsevier.com
Publication Type: Journal Articles; Reports - Evaluative
Education Level: Elementary Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Assessments and Surveys: National Assessment of Educational Progress
Grant or Contract Numbers: N/A