ERIC Number: ED287175
Record Type: RIE
Publication Date: 1987-Mar-20
Reference Count: N/A
Improving Interrater Reliability.
Atkinson, Dianne; Murray, Mary
Noting that improving rater reliability means eliminating differences among raters, this paper discusses ways to assess writing evaluator reliability and methods for achieving higher levels of interrater reliability. After showing that reliability can be improved in two ways--by increasing the number of raters or measurements made, and by increasing the systematic variance among essays relative to error variance--the paper cites common problems in reporting and assessing reliability. The paper then recommends that researchers (1) use an "analysis of variance" approach in assessing reliability; (2) indicate the number of independent observations; (3) use a two-way analysis of variance if more than one dimension is rated; (4) use a "repeated measures" analysis of variance if rating more than one sample per student; and (5) use an "intraclass correlation coefficient" such as coefficient alpha in reports of research, or the "Pearson r" when two raters rate one dimension of the sample. Finally, the paper describes methods to increase interrater reliability, such as controlling the range and quality of sample papers, specifying the scoring task through clearly defined objective categories, choosing raters familiar with the constructs to be identified, and training the raters in systematic practice sessions. (Formulas for calculating reliability and training procedures for raters are included.) (JG)
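The two coefficients the abstract names can be sketched in a few lines. The following is a minimal illustration, not the paper's own formulas: Pearson r for the two-rater, one-dimension case, and coefficient alpha (an intraclass-type coefficient) for k raters scoring the same essays. The function names and sample data are assumptions for demonstration.

```python
# Illustrative sketch of two interrater reliability statistics:
# Pearson r (two raters) and coefficient alpha (k raters).
from statistics import mean, pvariance

def pearson_r(x, y):
    """Pearson correlation between two raters' scores for the same essays."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def coefficient_alpha(ratings):
    """Coefficient alpha over a matrix: one row per essay, one column per rater."""
    k = len(ratings[0])                                   # number of raters
    rater_vars = sum(pvariance(col) for col in zip(*ratings))
    total_var = pvariance([sum(row) for row in ratings])  # variance of essay totals
    return (k / (k - 1)) * (1 - rater_vars / total_var)

# Hypothetical scores: three essays, two raters in perfect agreement.
scores = [[1, 1], [2, 2], [3, 3]]
print(pearson_r([r[0] for r in scores], [r[1] for r in scores]))  # 1.0
print(coefficient_alpha(scores))                                  # 1.0
```

Raters who agree perfectly yield a coefficient of 1.0; as rater variance grows relative to the variance among essays, both statistics fall, which is the sense in which the paper says reliability improves when systematic essay variance rises relative to error variance.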
Publication Type: Opinion Papers; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Note: Paper presented at the Annual Meeting of the Conference on College Composition and Communication (38th, Atlanta, GA, March 19-21, 1987).