ERIC Number: ED462388
Record Type: Non-Journal
Publication Date: 1994-Nov
Reference Count: N/A
Issues in Portfolio Assessment: The Scorability of Narrative Collections. Project 3.1: Studies in Improving Classroom and Local Assessments.
Gearhart, Maryl; Novak, John R.; Herman, Joan L.
This study examined technical questions regarding the reliability and validity of large-scale portfolio assessment, focusing on: (1) whether raters can reliably score collections of writing with rubrics designed for single samples; (2) whether ratings derived from different frameworks differ in their capacities to support technically sound assessments of narrative collections; and (3) whether ratings of distinctive narrative assessments characterize groups similarly. The study used 5 raters' judgments of 52 collections of elementary school student writing and was designed primarily to illustrate analytic techniques for addressing each of these questions. A further objective was to evaluate the "Writing What You Read" narrative rubric. The study produced preliminary evidence that the holistic scale of "Writing What You Read" can be used reliably and meaningfully in large-scale assessment of narrative collections. Results support the importance of rubrics designed to capture the qualities of distinctive writing genres. Seven figures and 29 tables present study findings, and 2 appendixes present writing prompts. (Contains 12 references.) (SLD)
Descriptors: Educational Assessment, Elementary Education, Elementary School Students, Essay Tests, Evaluation Methods, Interrater Reliability, Judges, Portfolio Assessment, Portfolios (Background Materials), Scoring, State Programs, Test Reliability, Test Use, Test Validity, Testing Programs, Writing (Composition), Writing Evaluation
Publication Type: Numerical/Quantitative Data; Reports - Research
Education Level: N/A
Sponsor: Office of Educational Research and Improvement (ED), Washington, DC.
Authoring Institution: Center for Research on Evaluation, Standards, and Student Testing, Los Angeles, CA.