ERIC Number: ED503778
Record Type: Non-Journal
Publication Date: 2005
Pages: 22
Abstractor: As Provided
Reference Count: 15
ISBN: N/A
ISSN: N/A
The Effect of Year-to-Year Rater Variation on IRT Linking
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg
Online Submission, Paper presented at the Annual Meeting of the American Educational Research Association (Montreal, Canada, 2005)
Year-to-year rater variation may produce changes in constructed-response (CR) item parameters, making CR items inappropriate for use in anchor sets for linking or equating. This study demonstrates how rater severity affected writing and reading scores. Rater adjustments were made to statewide results using an item response theory (IRT) methodology based on the work of Tate (1999, 2000). A common-item equating design was used to place Year 2 scores on the Year 1 scale after the Year 1 test was re-scored to adjust for rater effects. Two samples of data from contiguous years, designated Year 1 (n ~ 1,200) and Year 2 (n ~ 2,000), were examined from the writing and reading portions of a statewide assessment. The writing test consisted of 32, 36, and 40 selected-response items for grades 4, 6, and 8, respectively, plus a single writing prompt scored on a six-point scale (0-5) by two raters whose scores were summed for a composite. The reading test consisted of 75, 93, and 91 selected-response items and 12, 14, and 16 constructed-response items for grades 4, 6, and 8, respectively. All CR items in reading were scored on a three-point scale (0-2). The resulting item parameters were compared between Year 1 and Year 2, with and without rater adjustment. For writing, there were significant shifts in the parameters after rater adjustment. The p-values and test characteristic curves (TCCs) shifted across years when adjusted for rater effects. The impact of the parameter shifts was manifested in changes in proficiency classifications before and after adjustment. The results of the study suggest that raters were not consistently more severe or more lenient across grades or content areas, but the resulting rater error (severity or leniency) affected scores and would produce misleading results if not taken into account. (Contains 6 tables and 6 figures.)
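The abstract's common-item equating step, placing Year 2 item parameters on the Year 1 scale via anchor items, can be illustrated with a minimal sketch. The paper does not specify which linking method was used; the mean-sigma transformation below is one standard choice, and all item values and function names here are invented for illustration:

```python
# Hypothetical sketch of mean-sigma common-item linking; the study's actual
# method (based on Tate, 1999, 2000) may differ. Anchor-item difficulty
# values below are invented, not taken from the paper.
from statistics import mean, pstdev

def mean_sigma_link(b_year1, b_year2):
    """Return slope A and intercept B that place Year 2 item difficulties
    on the Year 1 scale, using the anchor items' b-parameters from each
    year's separate calibration."""
    A = pstdev(b_year1) / pstdev(b_year2)
    B = mean(b_year1) - A * mean(b_year2)
    return A, B

# Invented difficulties for the same anchor items calibrated in each year.
b1 = [-1.2, -0.4, 0.1, 0.8, 1.5]
b2 = [-1.0, -0.3, 0.2, 0.9, 1.7]

A, B = mean_sigma_link(b1, b2)
# Transform every Year 2 item: b* = A*b + B (and a* = a/A under a 2PL/3PL).
b2_on_year1_scale = [A * b + B for b in b2]
```

After the transformation, the anchor items' mean and standard deviation match across years by construction, which is what lets the adjusted and unadjusted Year 2 parameters be compared on a common metric.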
Publication Type: Reports - Research; Speeches/Meeting Papers
Education Level: Grade 4; Grade 6; Grade 8
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A