ERIC Number: ED422397
Record Type: RIE
Publication Date: 1998-Apr
A Comparison of Two Scoring Strategies for Performance Assessments.
Crehan, Kevin D.; Hudson, Rhoton
The aim of this study was to explore a method of improving the objectivity, reliability, and efficiency of scoring performance assessments that involve constructed written responses. Millman (1997) suggested an alternative to using model responses at each score category: a strategy, hypothesized to increase scorer reliability and cost effectiveness, that would instead use model answers judged to be halfway between the score categories. This paper reports on a small study designed to compare a scoring method using model responses at each category with a variation of Millman's suggested alternative. Existing student responses to a fifth-grade reading prompt from a large school district's assessment program were used. Twenty volunteers (graduate students) served as raters, and 200 responses to the same prompt were divided into 5 groups of 40 responses. Two raters from each scoring group scored the same 40 papers, allowing the comparison of two scores for each response under each scoring condition. No differences were detected between the scoring methods. This may be due to the difficulty of obtaining agreement on borderline responses to be used in training, or it may reflect the absence of a consensus on borderline anchor papers. In conclusion, no evidence was found to differentiate levels of rater agreement between using judgments of dominance and judgments of proximity. Appendixes present two study scoring rubrics. (Contains one table and nine references.) (SLD)
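The core comparison described above -- two raters scoring the same papers under each condition, with agreement compared across conditions -- can be sketched as follows. All scores, rubric range, and rater labels here are invented for illustration; they are not data from the study.

```python
# Sketch of the study's rater-agreement comparison: under each scoring
# condition, a pair of raters scores the same set of papers, and we
# compute the exact-agreement rate between the two raters.

def exact_agreement(scores_a, scores_b):
    """Fraction of papers on which the two raters gave the same score."""
    assert len(scores_a) == len(scores_b)
    matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
    return matches / len(scores_a)

# Invented scores on a hypothetical 0-4 rubric for 10 of the 40 papers
# a rater pair scored under each condition.
anchor_rater1 = [3, 2, 4, 1, 3, 2, 0, 4, 3, 2]  # anchors at each score category
anchor_rater2 = [3, 2, 4, 2, 3, 2, 0, 4, 3, 1]
border_rater1 = [3, 2, 4, 1, 3, 3, 0, 4, 3, 2]  # borderline (halfway) anchors
border_rater2 = [3, 1, 4, 1, 3, 2, 0, 4, 3, 2]

print(exact_agreement(anchor_rater1, anchor_rater2))  # 0.8
print(exact_agreement(border_rater1, border_rater2))  # 0.8
```

With agreement rates this close under both conditions, a comparison like the study's would detect no difference between the scoring methods.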
Publication Type: Reports - Research; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Note: Paper presented at the Annual Meeting of the National Council on Measurement in Education (San Diego, CA, April 14-16, 1998).