Peer reviewed
ERIC Number: EJ733955
Record Type: Journal
Publication Date: 2005
Pages: 13
Abstractor: Author
ISBN: N/A
ISSN: 0735-6331
EISSN: N/A
General Models for Automated Essay Scoring: Exploring an Alternative to the Status Quo
Kelly, P. Adam
Journal of Educational Computing Research, v33 n1 p101-113 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same writing characteristics across all essays. They reported empirical evidence that general scoring models performed nearly as well in agreeing with human readers as did prompt-specific models, the "status quo" for most AES systems. In this study, general and prompt-specific models were again compared, but this time, general models performed as well as or better than prompt-specific models. Moreover, general models measured the same writing characteristics across all essays, while prompt-specific models measured writing characteristics idiosyncratic to the prompt. Further comparison of model performance across two different writing tasks and writing assessment programs bolstered the case for general models. (Contains 4 tables.)
Baywood Publishing Company, Inc., 26 Austin Avenue, Box 337, Amityville, NY 11701. Tel: 800-638-7819 (Toll Free); Fax: 631-691-1770; e-mail: info@baywood.com.
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A