Peer reviewed
ERIC Number: EJ1110371
Record Type: Journal
Publication Date: 2011-Jun
Pages: 73
Abstractor: As Provided
ISSN: EISSN-2330-8516
Validation of Automated Scores of TOEFL iBT® Tasks against Nontest Indicators of Writing Ability. TOEFL iBT® Research Report. TOEFL iBT-15. ETS Research Report RR-11-24
Weigle, Sara Cushing
ETS Research Report Series, Jun 2011
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study addresses two validity-related issues regarding the use of e-rater® with the independent writing task on the TOEFL iBT® (Internet-based test). First, relationships between automated scores of iBT tasks and nontest indicators of writing ability were examined. This was followed by exploration of prompt-related differences in automated scores of essays written by the same examinees. Correlations between both human and e-rater scores and nontest indicators were moderate but consistent, with few differences between e-rater and human rater scores. E-rater was more consistent across prompts than individual human raters, although there were differences in scores across prompts for the individual features used to generate total e-rater scores.
Educational Testing Service, Rosedale Road, MS19-R, Princeton, NJ 08541. Tel: 609-921-9000; Fax: 609-734-5410.
Publication Type: Journal Articles; Reports - Research; Tests/Questionnaires
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Location: California (Los Angeles); Georgia; Indiana; Iowa; Michigan; Minnesota; New York; Washington
Identifiers - Assessments and Surveys: Test of English as a Foreign Language
Grant or Contract Numbers: N/A