Peer reviewed
ERIC Number: EJ1168485
Record Type: Journal
Publication Date: 2017-Dec
Pages: 16
Abstractor: As Provided
ISSN: EISSN-2330-8516
An Investigation of the "e-rater"® Automated Scoring Engine's Grammar, Usage, Mechanics, and Style Microfeatures and Their Aggregation Model. Research Report. ETS RR-17-04
Chen, Jing; Zhang, Mo; Bejar, Isaac I.
ETS Research Report Series, Dec 2017
Automated essay scoring (AES) systems generally compute essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at Educational Testing Service (ETS) for the automated scoring of essays, each of four macrofeatures ("grammar," "usage," "mechanics," and "style" [GUMS]) is computed from a set of microfeatures. Statistical analyses reveal that some of these microfeatures explain little of the variance in human scores, regardless of the writing task. Currently, the microfeatures within each macrofeature group are equally weighted to produce the macrofeature score. We propose an alternative weighting scheme that assigns higher weights to the microfeatures in each group that are more predictive of human scores. Our results suggest that although the proposed scheme and the current equal-weighting scheme differ negligibly in their prediction of human scores and their correlation with external measures, the proposed scheme considerably improves the consistency of the resultant macrofeature scores across writing tasks.
Educational Testing Service. Rosedale Road, MS19-R Princeton, NJ 08541. Tel: 609-921-9000; Fax: 609-734-5410; e-mail:; Web site:
Publication Type: Journal Articles; Reports - Research
Education Level: Higher Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A