Showing all 3 results
Peer reviewed
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
Peer reviewed
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Peer reviewed
Walters, Alyssa M.; Lee, Soonmook; Trapani, Catherine – ETS Research Report Series, 2004
The study investigated the applicability of previous experimental research on stereotype threat to operational Graduate Record Examinations® (GRE®) General Test testing centers. The goal was to document any relationships between features of the testing environment that might cue stereotype threat as well as any impact on GRE test scores among…
Descriptors: College Entrance Examinations, Graduate Study, Education Service Centers, Testing Problems