ERIC Number: ED458267
Record Type: Non-Journal
Publication Date: 2001
Pages: 9
Abstractor: N/A
Reference Count: N/A
ISBN: N/A
ISSN: N/A
Enriching Automated Essay Scoring Using Discourse Marking.
Burstein, Jill; Kukich, Karen; Wolff, Susanne; Lu, Chi; Chodorow, Martin
Electronic Essay Rater (e-rater) is a prototype automated essay scoring system built at Educational Testing Service that uses discourse marking, in addition to syntactic information and topical content vector analyses, to assign essay scores automatically. This paper gives a general description of e-rater as a whole, but its emphasis is on the importance of discourse marking and argument partitioning for annotating the argument structure of an essay. Two content vector analysis programs used to predict scores, EssayContent and ArgContent, are compared. EssayContent assigns scores to essays using a standard cosine correlation that treats the essay as a "bag of words," in that it does not consider word order. ArgContent uses a novel content vector analysis approach that assigns scores based on the individual arguments in an essay. In previous research, the average agreement between ArgContent scores and human rater scores was 82%, compared with 69% agreement between EssayContent and the human raters. These results suggest that discourse marking enriches e-rater's scoring capability. The paper reports overall evaluation results for e-rater's scoring performance on 13 sets of essay data from the writing assessment of the Graduate Management Admission Test and 2 sets of essay data from the Test of Written English. When e-rater uses its whole set of predictive features, agreement with human rater scores ranges from 87% to 94% across the 15 sets of essay responses used in this study. (Contains 3 tables and 16 references.) (Author/SLD)
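To make the "bag of words" cosine comparison concrete, here is a minimal Python sketch of content vector scoring. The function names (`cosine`, `score_essay`) and the setup of pooling human-scored training essays into one term-frequency vector per score point are illustrative assumptions; the paper's actual term weighting and training procedure are not reproduced here.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def score_essay(essay: str, category_vectors: dict[int, Counter]) -> int:
    """Assign the score of the most similar category vector.

    `category_vectors` maps a score point to a term-frequency vector
    pooled from training essays at that point -- an assumed setup,
    not necessarily e-rater's actual training procedure.
    """
    words = Counter(essay.lower().split())  # "bag of words": order ignored
    return max(category_vectors, key=lambda s: cosine(words, category_vectors[s]))

# Toy usage with hypothetical training vectors for score points 1 and 6:
training = {
    1: Counter("the essay is bad".split()),
    6: Counter("the argument is cogent and well supported".split()),
}
print(score_essay("a cogent well supported argument", training))  # -> 6
```

Per the abstract, ArgContent's refinement would apply this kind of comparison to individual discourse-marked arguments rather than to the essay as a whole.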
Publication Type: Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A