ERIC Number: ED334269
Record Type: Non-Journal
Publication Date: 1991-Feb
Pages: 19
Abstractor: N/A
Reference Count: N/A
Analysis of Interrater Reliability on the Evaluation of Answers to Open-Ended Questions.
Crews, William E., Jr.
As part of a study of teacher evaluation of student replies to open-ended questions, a second question--the best method of determining interrater reliability--was examined. The standard method, the Pearson Product-Moment correlation, overestimated the degree of match between researchers' and teachers' scoring of tests, while the simpler percent agreement method tended to underestimate agreement. Scores were derived from two science teachers who were teaching a total of five eighth grade classes. Twenty students in each of the classes took the study tests at any one time. Two tests were used (the Earth and Moon Test and the Earth and Sun Test), each of which contained 13 open-ended questions. The researcher and teachers evaluated the tests separately, and only the total number of correct answers for each student was used in the statistical analyses. Matched t-tests showed that most of the sets of scores could not be considered the same. A second sum of squares, fitted to the researcher's scores, was derived; the ratio of the two sums produced a fraction comparable to the correlation coefficient. Teachers usually evaluated answers adequately, but tended to count more answers correct than did the researcher for the higher ability classes. It is concluded that the halo effect was in operation; teachers should be warned to anticipate this type of bias. Four data tables are included. An appendix presents the instruments used. (SLD)
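The contrast the abstract draws can be made concrete with a small sketch. The data below are hypothetical (the record gives no scores), but they reproduce the reported pattern: when one rater consistently scores a point or so higher, the Pearson product-moment correlation stays near 1.0 while strict percent agreement drops sharply.

```python
# Sketch of the two interrater reliability indices discussed in the record.
# The score lists are invented for illustration; the actual study data are
# not given in the abstract.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of totals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def percent_agreement(x, y):
    """Fraction of students to whom both raters gave the same total score."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

# Hypothetical totals (out of 13 items) for eight students; the "teacher"
# rater tends to count one more answer correct, as the study reports.
researcher = [10, 8, 12, 7, 9, 11, 6, 10]
teacher    = [11, 8, 13, 7, 10, 12, 6, 11]

print(round(pearson_r(researcher, teacher), 3))       # near 1.0
print(round(percent_agreement(researcher, teacher), 3))  # well below 1.0
```

With these invented scores the correlation is about 0.99 while exact agreement is only 37.5%, illustrating why the study found the former overestimates, and the latter underestimates, the match between raters.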
Publication Type: Reports - Research; Speeches/Meeting Papers; Tests/Questionnaires
Education Level: N/A
Audience: N/A
Language: English
Sponsor: North Carolina State Univ., Raleigh. School of Education and Psychology.; North Carolina State Univ., Raleigh. Center for Research in Mathematics and Science Education.
Authoring Institution: N/A