ERIC Number: ED400277
Record Type: Non-Journal
Publication Date: 1996-Apr
Pages: 22
Abstractor: N/A
Reference Count: N/A
ISBN: N/A
ISSN: N/A
Optimal Designs for Performance Assessments: The Subject Factor.
Parkes, Jay
Speculation abounds concerning how expensive performance assessments are or will be. Recent projections indicate that, to achieve an acceptably high generalizability coefficient, many additional tasks may need to be added, which will increase costs. Such projections are to some degree correct and to some degree simplistic. The current investigation uses two synthetic examples, based on published costs and variance components, and a constrained optimization procedure to examine the complex relationships among reliability, cost, and sample size. The first example is a limited writing sample situation, and the second is a large-scale portfolio assessment. Results indicate that the optimal design changes as the number of subjects changes. Another set of results confirms what seems intuitively expected: as the number of subjects grows, the relatively fixed development cost becomes a smaller and smaller percentage of the total cost. These two sets of results appear to be directly related. Because development costs constitute the majority of total cost for the smaller samples, the optimal design includes more raters than prompts. That is, the burden of reliability is shifted to the part of the assessment that is least expensive in relative terms. (Contains 2 figures, 4 tables, and 14 references.) (Author/SLD)
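The abstract describes a constrained optimization over the numbers of raters and prompts/tasks needed to reach a target generalizability coefficient at acceptable cost. The following is a minimal sketch of that kind of trade-off, not the paper's actual procedure: it uses the standard relative G coefficient for a persons x tasks x raters design, and every variance component, cost figure, and target value below is a hypothetical placeholder rather than a number from the paper.

def g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_tasks, n_raters):
    """Relative G coefficient for a persons x tasks x raters design."""
    error = var_pt / n_tasks + var_pr / n_raters + var_ptr_e / (n_tasks * n_raters)
    return var_p / (var_p + error)

def total_cost(n_subjects, n_tasks, n_raters, dev_cost_per_task, score_cost_per_response):
    """Fixed task-development cost plus per-response scoring cost."""
    development = n_tasks * dev_cost_per_task
    scoring = n_subjects * n_tasks * n_raters * score_cost_per_response
    return development + scoring

def optimal_design(n_subjects, target_g=0.80, max_tasks=10, max_raters=10):
    """Brute-force search for the cheapest (tasks, raters) design meeting target_g."""
    # Hypothetical variance components: person, person x task, person x rater, residual.
    var_p, var_pt, var_pr, var_ptr_e = 0.40, 0.30, 0.05, 0.25
    dev_cost_per_task = 5000.0        # hypothetical fixed cost to develop one task
    score_cost_per_response = 3.0     # hypothetical cost for one rater to score one response
    best = None
    for n_t in range(1, max_tasks + 1):
        for n_r in range(1, max_raters + 1):
            if g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_t, n_r) < target_g:
                continue
            cost = total_cost(n_subjects, n_t, n_r, dev_cost_per_task, score_cost_per_response)
            if best is None or cost < best[0]:
                best = (cost, n_t, n_r)
    return best  # (total cost, n_tasks, n_raters) or None if target is unreachable

if __name__ == "__main__":
    # Illustrates the abstract's point: as the number of examinees grows, fixed
    # development cost shrinks as a share of total cost, so the cheapest
    # admissible mix of raters and prompts can shift.
    for subjects in (100, 1000, 10000):
        print(subjects, optimal_design(subjects))

With these placeholder values, small samples favor designs that lean on the cheaper facet (here, raters) because task development dominates total cost, while large samples spread cost differently, which is consistent with the pattern the abstract reports.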
Publication Type: Reports - Evaluative; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A