Peer reviewed
ERIC Number: EJ843855
Record Type: Journal
Publication Date: 2006-Aug
Pages: 36
Abstractor: As Provided
Reference Count: 56
ISSN: 1540-2525
An Overview of Automated Scoring of Essays
Dikli, Semire
Journal of Technology, Learning, and Assessment, v5 n1 Aug 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are used mainly to overcome issues of time, cost, reliability, and generalizability in writing assessment (Bereiter, 2003; Burstein, 2003; Chung & O'Neil, 1997; Hamp-Lyons, 2001; Myers, 2003; Page, 2003; Rudner & Gagne, 2001; Rudner & Liang, 2002; Sireci & Rizavi, 1999). AES continues to attract the attention of public schools, universities, testing companies, researchers, and educators (Burstein, Kukich, Wolf, Lu, & Chodorow, 1998; Shermis & Burstein, 2003; Sireci & Rizavi, 1999). The main purpose of this article is to provide an overview of current approaches to AES. The article describes the most widely used AES systems, including Project Essay Grader[TM] (PEG), Intelligent Essay Assessor[TM] (IEA), E-rater[R] and Criterion[SM], IntelliMetric[TM] and MY Access![R], and Bayesian Essay Test Scoring System[TM] (BETSY). It also discusses the main characteristics of these systems and current issues regarding their use in both low-stakes assessment (in classrooms) and high-stakes assessment (in standardized tests). (Contains 12 endnotes, 2 tables, and 5 figures.)
Technology and Assessment Study Collaborative. 332 Campion Hall, Boston College, Chestnut Hill, MA 02467. Tel: 617-552-4521; Fax: 617-552-8419.
Publication Type: Journal Articles; Reports - Descriptive
Education Level: Elementary Secondary Education; Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A