Peer reviewed
ERIC Number: EJ993324
Record Type: Journal
Publication Date: 2012
Pages: 15
Abstractor: As Provided
Reference Count: 44
ISBN: N/A
ISSN: 1556-8180
Utilizing Generalizability Theory to Investigate the Reliability of the Grades Assigned to Undergraduate Research Papers
Gugiu, Mihaiela R.; Gugiu, Paul C.; Baldus, Robert
Journal of MultiDisciplinary Evaluation, v8 n19 p26-40 2012
Background: Educational researchers have long espoused the virtues of writing for developing student cognitive skills. However, research on the reliability of the grades assigned to written papers is highly contradictory: some researchers conclude that the grades assigned are very reliable, whereas others suggest that they are so unreliable that random assignment of grades would have been almost as helpful.

Purpose: The primary purpose of the study was to investigate the reliability of grades assigned to written reports. The secondary purpose was to illustrate the use of Generalizability Theory, specifically the fully crossed two-facet model, for computing interrater reliability coefficients.

Setting: The participants were 29 undergraduate students enrolled in an introductory-level course on Political Behavior in Spring 2011 at a Midwest university.

Intervention: Not applicable.

Research Design: Students were randomly assigned to one of nine groups. Two-facet fully crossed G-study and D-study designs were used, wherein two raters graded four assignments for nine student groups (72 evaluations in total). The universe of admissible observations was deemed random for both raters and assignments, whereas the universe of generalization was deemed mixed (random for the two raters but fixed for the four assignments).

Data Collection and Analysis: The semester-long group project consisted of four written assignments: an annotated bibliography, survey development, a sampling design, and an analysis and final report. A grading rubric was developed for each assignment and used to evaluate the quality of each written report. Two-facet generalizability analyses were conducted to assess interrater reliability using software developed by one of the authors.

Findings: This study found a very high interrater reliability coefficient (0.929) for only two raters who received no training in how to use the four grading rubrics.
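The fully crossed two-facet analysis described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' software: it estimates the standard G-theory variance components for a persons × raters × assignments design from the expected mean squares of a three-way ANOVA (one score per cell), then computes a relative G coefficient for the mixed D study the abstract describes (raters random, assignments fixed). The function names and any example data are assumptions made for this sketch.

```python
import numpy as np

def g_study_variance_components(scores):
    """Estimate G-study variance components for a fully crossed
    p x r x a design (persons x raters x assignments), one score
    per cell, via expected mean squares. Negative estimates are
    clipped to zero, a common convention in G theory."""
    p, r, a = scores.shape
    grand = scores.mean()
    m_p = scores.mean(axis=(1, 2))          # person means
    m_r = scores.mean(axis=(0, 2))          # rater means
    m_a = scores.mean(axis=(0, 1))          # assignment means
    m_pr = scores.mean(axis=2)              # person x rater cell means
    m_pa = scores.mean(axis=1)              # person x assignment cell means
    m_ra = scores.mean(axis=0)              # rater x assignment cell means

    # Sums of squares for main effects, two-way interactions, residual.
    ss_p = r * a * np.sum((m_p - grand) ** 2)
    ss_r = p * a * np.sum((m_r - grand) ** 2)
    ss_a = p * r * np.sum((m_a - grand) ** 2)
    ss_pr = a * np.sum((m_pr - m_p[:, None] - m_r[None, :] + grand) ** 2)
    ss_pa = r * np.sum((m_pa - m_p[:, None] - m_a[None, :] + grand) ** 2)
    ss_ra = p * np.sum((m_ra - m_r[:, None] - m_a[None, :] + grand) ** 2)
    ss_pra = np.sum((scores - grand) ** 2) - ss_p - ss_r - ss_a - ss_pr - ss_pa - ss_ra

    ms = {
        "p": ss_p / (p - 1), "r": ss_r / (r - 1), "a": ss_a / (a - 1),
        "pr": ss_pr / ((p - 1) * (r - 1)),
        "pa": ss_pa / ((p - 1) * (a - 1)),
        "ra": ss_ra / ((r - 1) * (a - 1)),
        "pra": ss_pra / ((p - 1) * (r - 1) * (a - 1)),
    }
    # Solve the expected-mean-square equations for the random model.
    var = {"pra": max(ms["pra"], 0.0)}
    var["pr"] = max((ms["pr"] - ms["pra"]) / a, 0.0)
    var["pa"] = max((ms["pa"] - ms["pra"]) / r, 0.0)
    var["ra"] = max((ms["ra"] - ms["pra"]) / p, 0.0)
    var["p"] = max((ms["p"] - ms["pr"] - ms["pa"] + ms["pra"]) / (r * a), 0.0)
    var["r"] = max((ms["r"] - ms["pr"] - ms["ra"] + ms["pra"]) / (p * a), 0.0)
    var["a"] = max((ms["a"] - ms["pa"] - ms["ra"] + ms["pra"]) / (p * r), 0.0)
    return var

def g_coefficient_mixed(var, n_r, n_a):
    """Relative G coefficient for a D study with raters random and
    assignments fixed: the person x assignment component joins the
    universe score, and only rater-linked components contribute to
    relative error."""
    universe = var["p"] + var["pa"] / n_a
    rel_error = var["pr"] / n_r + var["pra"] / (n_r * n_a)
    return universe / (universe + rel_error)
```

For the study's layout, `scores` would be a 9 × 2 × 4 array (nine groups, two raters, four assignments); when the two raters agree perfectly the rater-linked error components vanish and the coefficient is exactly 1.0, which is a useful sanity check on the estimator.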
(Contains 2 tables, 2 figures, and 7 footnotes.)
Evaluation Center, Western Michigan University. 1903 West Michigan Avenue, Kalamazoo, MI 49008-5237. Tel: 269-387-5906; Fax: 269-387-5923; e-mail: eval-center@wmich.edu; Web site: http://www.jmde.com
Publication Type: Journal Articles; Reports - Research
Education Level: Higher Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A