Peer reviewed
ERIC Number: EJ1059756
Record Type: Journal
Publication Date: 2015-Apr
Pages: 14
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: 1531-7714
Interrater Reliability in Large-Scale Assessments--Can Teachers Score National Tests Reliably without External Controls?
Pantzare, Anna Lind
Practical Assessment, Research & Evaluation, v20 n9 Apr 2015
In most large-scale assessment systems, rather expensive external quality controls are implemented to ensure interrater reliability. This study empirically examines whether teachers' ratings of national tests in mathematics can be reliable without monitoring, training, or other forms of external quality assurance. A sample of 99 booklets of students' answers to a national test in mathematics was scored independently by five teachers. Interrater reliability was analyzed using consensus and consistency estimates, focusing on the test as a whole as well as on individual items. The results show that the estimates are acceptable and in many cases fairly high, irrespective of the reliability measure used. Some plausible explanations for lower interrater reliability on individual items are discussed, and suggestions are offered for further improving reliability without imposing a system of external control.
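The abstract does not specify which consensus and consistency statistics were computed. As a minimal illustration only, the sketch below assumes two statistics commonly used for this purpose: mean pairwise exact agreement (a consensus estimate) and Cronbach's alpha treating raters as "items" (a consistency estimate), applied to a hypothetical matrix of 99 booklets scored by 5 raters. The data and function names are invented for the example and are not taken from the study.

```python
import numpy as np
from itertools import combinations

def exact_agreement(scores: np.ndarray) -> float:
    """Consensus estimate: mean proportion of identical scores over all rater pairs.

    scores has shape (n_examinees, n_raters).
    """
    n_raters = scores.shape[1]
    pair_rates = [
        np.mean(scores[:, i] == scores[:, j])
        for i, j in combinations(range(n_raters), 2)
    ]
    return float(np.mean(pair_rates))

def cronbach_alpha(scores: np.ndarray) -> float:
    """Consistency estimate: Cronbach's alpha with raters treated as items."""
    k = scores.shape[1]
    rater_variances = scores.var(axis=0, ddof=1)       # variance of each rater's scores
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - rater_variances.sum() / total_variance)

if __name__ == "__main__":
    # Hypothetical data: 99 booklets, 5 raters, scores 0-4 with small rater disagreements.
    rng = np.random.default_rng(0)
    true_scores = rng.integers(0, 5, size=(99, 1))
    noise = rng.integers(-1, 2, size=(99, 5))
    ratings = np.clip(true_scores + noise, 0, 4)
    print(f"Exact agreement: {exact_agreement(ratings):.2f}")
    print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```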
Center for Educational Assessment. 813 North Pleasant Street, Amherst, MA 01002. e-mail: pare@umass.edu; Tel: 413-577-2180; Web site: https://scholarworks.umass.edu/pare
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Location: Sweden
Grant or Contract Numbers: N/A