ERIC Number: ED271483
Record Type: RIE
Publication Date: 1986-Apr
Pages: 10
Abstractor: N/A
ISBN: N/A
ISSN: N/A
EISSN: N/A
Rating Format Effects on Rater Agreement and Reliability.
Littlefield, John H.; Troendle, G. Roger
This study compares intra- and inter-rater agreement and reliability when using three different rating-form formats to assess the same stimuli. One format requests assessment by marking detailed criteria without an overall judgement; the second format requests only an overall judgement without the use of detailed criteria; and the third format combines detailed criteria with an overall judgement. Results are interpreted from a cognitive processing theoretical framework. Subjects were five full-time and three part-time dental faculty members. The experimental task was to evaluate five crown preparations during six trials using each of the three rating forms, but raters were not informed they were reevaluating the same teeth. Raters were assigned code numbers to maintain anonymity, and teeth were identified only by code numbers. Data analysis was based on ratings of five teeth from trials one through six; the trials were six weeks apart. Inter-rater agreement among the eight raters was distressingly low, though consistent with the range reported in one previous study. The study suggests that the traditional practice of scoring performance ratings by summing across multiple criteria may reduce intra-rater reliability. Rating forms that are structured to parallel rater cognitive processes may result in more reproducible scores than traditional summation scoring methods. (LMO)
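The record does not reproduce the agreement and reliability statistics the authors computed. As a rough illustration only, the Python sketch below shows one way quantities of this kind can be calculated for eight raters scoring the same five teeth on repeated trials. The data are invented, and the statistics chosen here (pairwise percent agreement for inter-rater agreement, per-rater test-retest correlation for intra-rater reliability) are assumptions for illustration, not the measures used in the study.

# Illustrative sketch: inter-rater agreement and intra-rater reliability
# for repeated ratings of the same stimuli. All data and statistic choices
# below are invented; the study's actual analysis is not given in this record.
import numpy as np

rng = np.random.default_rng(0)

n_raters, n_teeth = 8, 5            # mirrors the study's 8 raters and 5 teeth
scale = np.arange(1, 5)             # hypothetical 4-point rating scale

# ratings[trial, rater, tooth]: two trials of the same five teeth
ratings = rng.choice(scale, size=(2, n_raters, n_teeth))

def pairwise_agreement(trial):
    # Fraction of rater pairs assigning the same score to the same tooth,
    # pooled over all teeth (one common notion of inter-rater agreement).
    agree, pairs = 0, 0
    for i in range(n_raters):
        for j in range(i + 1, n_raters):
            agree += np.sum(trial[i] == trial[j])
            pairs += n_teeth
    return agree / pairs

def intra_rater_reliability(trial1, trial2):
    # Correlation of each rater's scores for the same teeth across two
    # trials, averaged over raters (a test-retest notion of reliability).
    rs = [np.corrcoef(trial1[i], trial2[i])[0, 1] for i in range(n_raters)]
    return np.nanmean(rs)           # nanmean guards against constant ratings

print("inter-rater agreement:", pairwise_agreement(ratings[0]))
print("intra-rater reliability:", intra_rater_reliability(ratings[0], ratings[1]))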
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Audience: Researchers
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Note: Paper presented at the Annual Meeting of the American Educational Research Association (70th, San Francisco, CA, April 16-20, 1986).