ERIC Number: EJ893464
Record Type: Journal
Publication Date: 2010-Jul
Pages: 18
Abstractor: As Provided
Reference Count: 44
ISSN: 0265-5322
Complementing Human Judgment of Essays Written by English Language Learners with E-Rater® Scoring
Enright, Mary K.; Quinlan, Thomas
Language Testing, v27 n3 p317-334 Jul 2010
E-rater® is an automated essay scoring system that uses natural language processing techniques to extract features from essays and to statistically model human holistic ratings. Educational Testing Service has investigated the use of e-rater, in conjunction with human ratings, to score one of the two writing tasks on the TOEFL iBT® writing section. In this article we describe the TOEFL iBT writing section and an e-rater model proposed to provide one of two ratings for the Independent writing task. We discuss how the evidence for a process that uses both human and e-rater scoring is relevant to four components of a validity argument: (a) Evaluation--observations of performance on the writing task are scored to provide evidence of targeted writing skills; (b) Generalization--scores on the writing task provide estimates of expected scores over relevant parallel versions of the task and across raters; (c) Extrapolation--expected scores on the writing task are consistent with other measures of writing ability; and (d) Utilization--scores on the writing task are useful in educational contexts. Finally, we propose directions for future research that will strengthen the case for using complementary methods of scoring to improve the assessment of EFL writing. (Contains 1 figure and 5 tables.)
SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665.
Publication Type: Journal Articles; Reports - Descriptive
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A