ERIC Number: ED506844
Record Type: Non-Journal
Publication Date: 2007-Jun
Pages: 65
Abstractor: ERIC
Reference Count: N/A
ISBN: N/A
ISSN: N/A
Estimating Effects of Non-Participation on State NAEP Scores Using Empirical Methods
Grissmer, David
American Institutes for Research
The NAEP Validity Studies (NVS) Panel provides advice and recommendations to help ensure the "validity" of National Assessment of Educational Progress (NAEP) test scores. The primary objectives of NAEP tests are to accurately monitor the progress of defined groups of students over time and to measure valid differences in scores between student groups at a single point in time. In this context, valid scores reflect differences in scores that are linked to "real" differences in student knowledge as measured on achievement tests. Several threats exist that can change scores so that differences are not linked to actual student knowledge. Scores can vary due to a variety of more or less random factors linked to sampling, administration, student motivation, question selection and distribution among test booklets, success at guessing, and other factors. These factors are not particularly problematic if they are caused by truly random events, especially if such random variation is captured and estimated by standard errors. The more problematic changes in scores are those caused by factors that systematically bias the scores among student groups taking the test at a single point in time, or bias the scores of student groups taking tests at two points in time. Two of the main threats that might cause such bias are "differential student exclusion" from NAEP testing and "differential participation of selected schools and students" in NAEP testing. This study takes up the latter threat to the validity of scores: differential and changing participation rates of schools and students in NAEP testing. It explores the extent to which common mechanisms might exist across states that explain patterns of non-participation and, if such mechanisms exist, estimates their potential for bias.
If mechanisms are operating within each state that partially determine the level of non-participation, then non-participation becomes a factor in explaining the pattern of NAEP scores across states. Empirically estimated models that explain the pattern of state scores across 17 tests from 1990-2003 might then provide an estimate of the effect on state scores of changing patterns of non-participation. Specifically, this study has four objectives: (1) to compile and examine student and school non-participation rates across state NAEP tests from 1990-2003 and assess whether common factors are present that might explain non-participation patterns across states, along with their potential for bias; (2) to treat the 2002-2003 4th- and 8th-grade state scores as a natural experiment for estimating the extent of possible bias; (3) to develop statistical models that account for the pattern of 696 state NAEP scores from 1990-2003 and to assess whether the pattern of non-participation is a significant explanatory factor; and (4) to compare estimates of bias from these methods to the bias from worst-case scenarios estimated by McLaughlin (2004). Appended are: (1) Student and School Participation Rates for 17 State NAEP Tests from 1990-2003; and (2) Regression Results. (Contains 21 exhibits and a bibliography.)
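To illustrate the kind of analysis described in objective (3), the sketch below regresses state scores on a non-participation rate using ordinary least squares. This is not the study's actual model or data: the variables, coefficients, and synthetic data are invented for demonstration, and only the sample size (696 pooled state scores) comes from the abstract.

```python
# Hedged, illustrative sketch: OLS regression of synthetic state-level scores
# on a non-participation rate. All data below are fabricated; only n = 696
# (the number of pooled state scores) is taken from the record.
import numpy as np

rng = np.random.default_rng(0)
n = 696  # state scores pooled across 17 state NAEP tests, per the abstract

# Hypothetical covariates (assumptions, not from the study):
nonpart = rng.uniform(0.0, 0.25, size=n)    # fraction of non-participation
baseline = rng.normal(250.0, 10.0, size=n)  # invented state baseline level

# Generate synthetic scores with a built-in non-participation effect of -20.
scores = baseline - 20.0 * nonpart + rng.normal(0.0, 2.0, size=n)

# Design matrix: intercept, baseline level, non-participation rate.
X = np.column_stack([np.ones(n), baseline, nonpart])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

# beta[2] estimates the association between non-participation and scores;
# testing whether such a coefficient is significant in real data is the
# kind of check the study's statistical models perform.
print(beta)
```

In the study itself, a significant coefficient on non-participation (after accounting for the factors that otherwise explain the pattern of state scores) would indicate that changing participation rates bias reported state NAEP scores.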
American Institutes for Research. 1000 Thomas Jefferson Street NW, Washington, DC 20007. Tel: 202-403-5000; Fax: 202-403-5001; e-mail: inquiry@air.org; Web site: http://www.air.org
Publication Type: Numerical/Quantitative Data; Reports - Evaluative
Education Level: Grade 4; Grade 8
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: American Institutes for Research
Identifiers - Laws, Policies, & Programs: Rehabilitation Act 1973 (Section 504)