ERIC Number: ED381994
Record Type: RIE
Publication Date: 1994-Feb
Reference Count: N/A
Using FACETS To Model Rater Training Effects. Draft.
Weigle, Sara Cushing
This paper describes a study on rater training that involved the analysis of ratings given to English-as-a-Second-Language (ESL) compositions by 8 inexperienced and 8 experienced raters both before and after rater training, using FACETS (Linacre, 1990, 1993), which provides measures of rater severity and consistency. The test task was a 50-minute composition, written on 1 of 2 prompts, from the ESL Placement Examination (ESLPE) at the University of California, Los Angeles. Compositions were rated using the ESLPE Rating Scale on content, rhetorical control, and language. Each essay was read by two raters, primarily ESL faculty and teaching assistants, and the scores were averaged. All raters attended mandatory composition rater training. FACETS fit a 4-facet model with estimates of examinee ability, rater severity, scale difficulty, and prompt difficulty. Before training, the raters as a group differed significantly from one another in severity; after training, the clear distinction between new and experienced raters was no longer visible. Findings indicate that rater severity evened out somewhat across the group after training, although the spread of rater severities remained significant. Rater consistency improved, and extreme ratings by new raters were reduced. Results confirm that rater training cannot make raters into duplicates of one another, but it can make them more self-consistent. Appendixes include the ESLPE rating guidelines and a sample ESLPE. (Contains 25 references.) (NAV)
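The 4-facet model described in the abstract can be illustrated with a minimal sketch. This assumes the standard many-facet Rasch formulation in logits (a dichotomous simplification for illustration; the actual ESLPE analysis used FACETS with a rating scale, and the function name here is hypothetical):

```python
import math

def facets_prob(ability, rater_severity, scale_difficulty, prompt_difficulty):
    """Probability of success under a simple dichotomous four-facet
    Rasch model: all parameters are on a common logit scale, and a
    harsher rater, harder scale, or harder prompt lowers the
    probability for a given examinee ability."""
    logit = ability - rater_severity - scale_difficulty - prompt_difficulty
    return 1.0 / (1.0 + math.exp(-logit))

# An examinee exactly matched to the combined difficulty has a 0.5 chance.
p_matched = facets_prob(0.0, 0.0, 0.0, 0.0)

# The same examinee scored by a harsher rater (severity +0.5 logits)
# has a lower probability of success.
p_lenient = facets_prob(1.0, 0.0, 0.0, 0.0)
p_harsh = facets_prob(1.0, 0.5, 0.0, 0.0)
```

This sketch shows why rater severity can be separated from examinee ability in such a model: severity enters the logit additively, so FACETS can estimate it from the pattern of scores across raters.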
Publication Type: Reports - Research
Education Level: N/A
Authoring Institution: N/A
Identifiers: FACETS Computer Program; University of California Los Angeles
Note: Paper presented at the Language Testing Research Colloquium (Washington, DC, 1994).