Peer reviewed
ERIC Number: ED531481
Record Type: Non-Journal
Publication Date: 2012-Apr
Pages: 68
Abstractor: As Provided
Reference Count: 39
ISBN: N/A
ISSN: N/A
Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates. NCEE 2012-4019
Fortson, Kenneth; Verbitsky-Savitz, Natalya; Kopa, Emma; Gleason, Philip
National Center for Education Evaluation and Regional Assistance
Randomized controlled trials (RCTs) are widely considered to be the gold standard in evaluating the impacts of a social program. When an RCT is infeasible, researchers often estimate program impacts by comparing outcomes of program participants with those of a nonexperimental comparison group, adjusting for observable differences between the two groups. Nonexperimental comparison group methods could produce unbiased estimates if the underlying assumptions hold, but those assumptions are usually not testable in practice. Prior studies generally find that nonexperimental designs fail to produce unbiased estimates. However, these studies have been criticized for using only limited pre-intervention data, measuring outcomes and covariates inconsistently for different research groups, or drawing comparison groups from dissimilar populations. The present study was designed to address these challenges. We test the validity of four different comparison group approaches--OLS regression modeling, exact matching, propensity score matching, and fixed effects modeling--comparing nonexperimental impact estimates from these methods with an experimental benchmark. The analysis uses data from an experimental evaluation of charter schools and comparison data for other students in the same school districts in the baseline period. We find that the use of pre-intervention baseline data that are strongly predictive of the key outcome measures considerably reduces but might not completely eliminate bias. Regression-based nonexperimental impact estimates are significantly different from experimental impact estimates, though the magnitude of the difference is modest. In this study, matching estimators perform slightly better than do estimators that rely on parametric assumptions and generate impact estimates that are not significantly different from the experimental estimates. However, the matching and regression-based estimates are not greatly different from one another. 
These findings are robust to restrictions on the comparison group used, the modeling specifications employed, and the data assumed to be available. Appended are: (1) Sample Weights; and (2) Supplemental Tables. (Contains 20 tables, 1 figure and 35 footnotes.)
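The core comparison the abstract describes, a nonexperimental estimate checked against an experimental benchmark, can be sketched on simulated data. This is an illustrative toy example, not the report's actual analysis: the data, the selection mechanism, and the effect size (0.2) are all invented, and only two of the four methods studied (a naive comparison and OLS regression adjustment on a baseline covariate) are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
true_effect = 0.2  # hypothetical treatment effect in outcome SD units

# Baseline covariate (e.g., a pre-intervention test score) that strongly
# predicts the outcome -- the kind of data the study finds reduces bias.
x = rng.normal(0.0, 1.0, n)

# Experimental benchmark: treatment assigned at random, as in an RCT.
t_rct = rng.integers(0, 2, n)
y_rct = 0.8 * x + true_effect * t_rct + rng.normal(0.0, 1.0, n)
rct_estimate = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Nonexperimental setting: selection into treatment depends on the
# baseline covariate, so treated and untreated groups differ at baseline.
p_treat = 1.0 / (1.0 + np.exp(-x))
t_obs = rng.binomial(1, p_treat)
y_obs = 0.8 * x + true_effect * t_obs + rng.normal(0.0, 1.0, n)

# Naive difference in means: biased, because it ignores selection on x.
naive_estimate = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

# OLS regression adjustment for the baseline covariate: the coefficient
# on the treatment indicator is the regression-based impact estimate.
design = np.column_stack([np.ones(n), t_obs, x])
beta, *_ = np.linalg.lstsq(design, y_obs, rcond=None)
ols_estimate = beta[1]

print(f"RCT benchmark:   {rct_estimate:.3f}")
print(f"Naive contrast:  {naive_estimate:.3f}")
print(f"OLS-adjusted:    {ols_estimate:.3f}")
```

With selection on a single observed covariate, regression adjustment recovers the benchmark almost exactly; the study's point is that with real data, where selection may also depend on unobservables, such adjustment reduces bias substantially but may not eliminate it.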
National Center for Education Evaluation and Regional Assistance. Available from: ED Pubs. P.O. Box 1398, Jessup, MD 20794-1398. Tel: 877-433-7827; Web site: http://ies.ed.gov/ncee/
Publication Type: Reports - Research
Education Level: Grade 5; Grade 6; Grade 7; Junior High Schools; Middle Schools
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: National Center for Education Evaluation and Regional Assistance (ED)
IES Funded: Yes