ERIC Number: ED562174
Record Type: Non-Journal
Publication Date: 2015
Reference Count: 7
Evaluating the Performance of Repeated Measures Approaches in Replicating Experimental Benchmark Results
McConeghy, Kevin; Wing, Coady; Wong, Vivian C.
Society for Research on Educational Effectiveness
Randomized experiments have long been established as the gold standard for addressing causal questions. However, experiments are not always feasible or desirable, so observational methods are also needed. When multiple observations on the same variable are available, a repeated measures design may be used to assess whether a treatment administered at a known time results in changes in the outcome. Despite the popularity of repeated measures approaches for assessing policy impacts, questions remain about the empirical performance of these approaches in field settings. In within-study comparison (WSC) designs, the quasi-experimental (QE) approach is evaluated by comparing QE results with those from a benchmark design that shares the same treatment group. The purpose of this WSC is to examine the following three methodological questions: (1) Does the simple interrupted time series (ITS) produce unbiased treatment effects, relative to an experimental benchmark?; (2) Do the comparative ITS and differences-in-differences (DID) approaches produce unbiased treatment effects, relative to an experimental benchmark?; and (3) Does the use of multiple in-state and out-of-state non-equivalent comparison groups rule out plausible threats to validity in the comparative ITS and DID designs? This study employs experimental data from the Cash and Counseling Demonstration Project (Carlson, Foster, Dale, & Brown, 2007), which evaluated the effects of a "consumer-directed" care program on Medicaid recipients' outcomes. The data include monthly Medicaid expenditures for 12 months prior to the intervention (pretest) and 12 months after the intervention (posttest). Medicaid participants in Arkansas, New Jersey, and Florida were randomly assigned to treatment and control conditions, where the treatment consisted of Medicaid recipients selecting their own services using a Medicaid-funded account, and the control consisted of local agencies selecting services for Medicaid recipients.
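The differences-in-differences logic examined in questions (2) and (3) can be sketched in a few lines: the change in the treatment group's mean outcome from the pre-period to the post-period, minus the same change in a non-equivalent comparison group, nets out shared time trends. The sketch below uses simulated monthly expenditure data with an assumed effect size; the numbers are illustrative and are not taken from the Cash and Counseling study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly expenditures: 12 pre-period and 12 post-period
# months per recipient, for a treatment and a comparison group.
# Means, spread, and the effect size are assumptions for illustration.
true_effect = 150.0
n = 500  # recipients per group

pre_t = rng.normal(1000, 200, (n, 12))               # treatment, pre
post_t = rng.normal(1000 + true_effect, 200, (n, 12))  # treatment, post
pre_c = rng.normal(980, 200, (n, 12))                # comparison, pre
post_c = rng.normal(980, 200, (n, 12))               # comparison, post

# DID estimate: (treatment change) minus (comparison change).
# Any time trend common to both groups cancels in the subtraction.
did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
print(round(did, 1))
```

With a large simulated sample the estimate lands near the assumed effect of 150; in field settings, as the record's findings note, the choice of comparison group and trend specification drives how close the DID estimate comes to the experimental benchmark.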
Findings include: (1) Consumer-directed care resulted in significant increases in Medicaid expenditures for all subgroups at each follow-up time period; (2) QE methods performed better at the earlier time points than at the later time points, which required more extrapolation of the regression function; (3) The models that allowed for maximum flexibility also tended to produce less precise estimates; (4) The WSC results indicate that within-state comparisons performed better than cross-state comparisons, and that gender-based comparisons produced less bias than age-based comparisons; and (5) The cross-state comparisons performed relatively well when the model was adjusted for more complicated trends. Tables are appended.
Descriptors: Replication (Evaluation), Benchmarking, Quasiexperimental Design, Comparative Analysis, Bias, Outcomes of Treatment, Validity, Demonstration Programs, Health Services, Intervention, Pretests Posttests, Randomized Controlled Trials
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; Fax: 202-640-4401; e-mail: firstname.lastname@example.org; Web site: http://www.sree.org
Publication Type: Reports - Research
Education Level: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Identifiers - Location: Arkansas; Florida; New Jersey