ERIC Number: ED459212
Record Type: Non-Journal
Publication Date: 2001-Jun
Pages: 40
Abstractor: N/A
Reference Count: N/A
Judging Evidence of the Validity of the National Assessment of Educational Progress Achievement Levels.
Loomis, Susan Cooper
This paper describes (1) the procedures developed to set achievement levels for the National Assessment of Educational Progress (NAEP) that contribute to establishing the validity of the levels and (2) the research studies designed to collect information related to the validity of the achievement levels and the outcomes of the process. The central issue in examining the validity of standards is whether there is evidence of procedural validity. The standards must be generally accepted as reasonable for the outcomes of the process for setting cutpoints to be valid. For each of the three American College Testing (ACT, Inc.) program contracts with the National Assessment Governing Board, the process of developing achievement levels descriptions has been different, as described, but in all cases there has been an effort to solicit broad-based commentary about the reasonableness of the achievement level descriptions. The selection of the panelists is important, since standard setting panels must be seen as credible. The paper describes the selection of panelists, field trials and pilot studies, training for facilitators, and panelist training. Several different rating methodologies have been evaluated and tested in panel studies for the NAEP achievement level process, but the modified Angoff method has the most solid research base in standard setting. Panelists participate in three rounds of item-by-item ratings with a variety of feedback after each round completing evaluations throughout the process. ACT, Inc. has performed various types of evaluations of the standard setting process and data. 
These include relatively standard analyses of standard-setting data, as well as validation research studies that fall into three groups: studies using item mapping procedures, studies comparing teachers' judgments of performance to empirical classifications of student performance, and studies comparing judgments of performance represented in test booklets to the empirical classification of those booklets. In the end, there is no way to know with certainty that cutscores are valid, although substantial effort goes into ensuring procedural validity. (Contains 21 tables and 50 references.) (SLD)
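The basic computation behind the modified Angoff method mentioned above can be sketched as follows. This is a minimal illustration only: the panelist ratings, panel size, and item count are hypothetical, and NAEP's actual procedure adds multiple rounds, feedback, and more elaborate aggregation.

```python
def angoff_cutscore(ratings):
    """Compute a panel cutscore under a basic (modified) Angoff model.

    ratings[p][i] is panelist p's estimate of the probability that a
    minimally qualified examinee answers item i correctly.  Each
    panelist's implied cutscore is the sum of their item probabilities
    (an expected number-correct score); the panel cutscore is the mean
    of the panelists' cutscores.
    """
    panelist_cuts = [sum(panelist) for panelist in ratings]
    return sum(panelist_cuts) / len(panelist_cuts)

# Hypothetical single round of ratings: 3 panelists, 5 items.
ratings = [
    [0.60, 0.75, 0.40, 0.85, 0.55],
    [0.65, 0.70, 0.45, 0.80, 0.50],
    [0.55, 0.80, 0.35, 0.90, 0.60],
]
print(round(angoff_cutscore(ratings), 2))  # panel cutscore on a 5-item test
```

In the iterated version the abstract describes, panelists would revise their item probabilities over three rounds in light of feedback, and this computation would be repeated on the final round's ratings.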
Publication Type: Reports - Descriptive; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: National Assessment Governing Board, Washington, DC.
Authoring Institution: ACT, Inc., Iowa City, IA.
Identifiers - Assessments and Surveys: National Assessment of Educational Progress