50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing all 11 results
Peer reviewed
Direct link
Munter, Charles; Wilhelm, Anne Garrison; Cobb, Paul; Cordray, David S. – Journal of Research on Educational Effectiveness, 2014
This article draws on previously employed methods for conducting fidelity studies and applies them to an evaluation of an unprescribed intervention. We document the process of assessing the fidelity of implementation of the Math Recovery first-grade tutoring program, an unprescribed, diagnostic intervention. We describe how we drew on recent…
Descriptors: Intervention, Program Implementation, Mathematics Education, Educational Diagnosis
Peer reviewed
Direct link
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica – Journal of Research on Educational Effectiveness, 2013
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Descriptors: Educational Research, Research Design, Sample Size, Accuracy
Peer reviewed
Direct link
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao – Journal of Research on Educational Effectiveness, 2012
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Descriptors: Intervals, Comparative Analysis, Inferences, Error Patterns
Peer reviewed
Direct link
Jo, Booil; Stuart, Elizabeth A. – Journal of Research on Educational Effectiveness, 2012
The authors thank Dr. Lindsay Page for providing a nice illustration of the use of the principal stratification framework to define causal effects, and a Bayesian model for effect estimation. They hope that her well-written article will help expose education researchers to these concepts and methods, and move the field of mediation analysis in…
Descriptors: Bayesian Statistics, Educational Experiments, Educational Research, Observation
Peer reviewed
Direct link
VanderWeele, Tyler J. – Journal of Research on Educational Effectiveness, 2012
Principal stratification provides an approach to study the effect of an exposure on an outcome within strata defined by the effect of the exposure on some third, posttreatment, variable (Frangakis & Rubin, 2002). There has been more recent interest in using principal stratification to study the extent to which the effect of an exposure on an…
Descriptors: Educational Research, Research Methodology, Social Science Research, Principals
Peer reviewed
Direct link
Page, Lindsay C. – Journal of Research on Educational Effectiveness, 2012
Experimental evaluations are increasingly common in the U.S. educational policy-research context. Often, in investigations of multifaceted interventions, researchers and policymakers alike are interested in not only "whether" a given intervention impacted an outcome but also "why". What "features" of the intervention led to the impacts observed,…
Descriptors: Educational Experiments, Educational Research, Research Methodology, Income
Peer reviewed
Direct link
Cowen, Joshua M. – Journal of Research on Educational Effectiveness, 2012
In this article I review the use of randomized field trials to evaluate school voucher interventions. I argue that although estimates of the effect of the voucher offer on achievement are unbiased in these trials, more specific interpretations such as the effect of attending private school may be difficult to obtain. I discuss several evaluation…
Descriptors: Achievement Gains, School Choice, Educational Vouchers, Private Schools
Peer reviewed
Direct link
Jacob, Robin Tepper; Jacob, Brian – Journal of Research on Educational Effectiveness, 2012
Teacher and principal surveys are among the most common data collection techniques employed in education research. Yet there is remarkably little research on survey methods in education, or about the most cost-effective way to raise response rates among teachers and principals. In an effort to explore various methods for increasing survey response…
Descriptors: Principals, Data Collection, Test Theory, Response Rates (Questionnaires)
Peer reviewed
Direct link
Steiner, Peter M. – Journal of Research on Educational Effectiveness, 2012
In this commentary, the author focuses on the use of design elements for increasing the severity of causal mediation tests. The estimation of causal mediation effects from observational data rests on rather stringent assumptions. In introducing and exemplifying ratio-of-mediator-probability weighting (RMPW), Hong and Nomi (henceforth HN) make…
Descriptors: Research Methodology, Test Construction, Test Validity, Causal Models
Peer reviewed
Direct link
Jacob, Robin; Zhu, Pei; Bloom, Howard – Journal of Research on Educational Effectiveness, 2010
This article provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of educational interventions. The article (a) provides new empirical information about the values of parameters that influence the precision of impact estimates (intraclass correlations and R² values) and…
Descriptors: Research Design, Research Methodology, Educational Research, Intervention
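The precision parameters this record mentions (intraclass correlations and R² values) enter standard power calculations for group-randomized designs. As an illustration only, the sketch below uses the common large-sample minimum-detectable-effect-size approximation from the cluster-randomized-trial methods literature; it is not code or data from the article itself, and the function name and example inputs are hypothetical.

```python
import math

def mdes(j_clusters, n_per_cluster, icc, r2_between=0.0, r2_within=0.0,
         z_alpha=1.96, z_power=0.84):
    """Approximate minimum detectable effect size (in standard deviation
    units) for a two-arm group-randomized trial with j_clusters total
    clusters split evenly between conditions.

    icc        -- intraclass correlation (share of variance between clusters)
    r2_between -- variance explained by cluster-level covariates
    r2_within  -- variance explained by individual-level covariates
    z_alpha    -- critical value for a two-tailed 5% test (default 1.96)
    z_power    -- z for 80% power (default 0.84)

    Large-sample normal approximation; ignores degrees-of-freedom
    corrections, so it slightly understates the MDES for small j_clusters.
    """
    # Variance of the standardized impact estimate: between-cluster and
    # within-cluster components, each deflated by its covariate R-squared.
    var = (4.0 / j_clusters) * (
        icc * (1 - r2_between)
        + (1 - icc) * (1 - r2_within) / n_per_cluster
    )
    return (z_alpha + z_power) * math.sqrt(var)

# Example: 40 schools, 20 students each, ICC = 0.20, no covariates.
print(round(mdes(40, 20, 0.20), 3))
# A strong cluster-level covariate (r2_between = 0.5) shrinks the MDES.
print(round(mdes(40, 20, 0.20, r2_between=0.5), 3))
```

The example makes the abstract's point concrete: with a sizable ICC, adding clusters or good cluster-level covariates improves precision far more than adding students within clusters.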
Peer reviewed
Direct link
Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2008
This study examines the reporting of power analyses in the group randomized trials funded by the Institute of Education Sciences from 2002 to 2006. A detailed power analysis provides critical information that allows reviewers to (a) replicate the power analysis and (b) assess whether the parameters used in the power analysis are reasonable.…
Descriptors: Statistical Analysis, Correlation, Research Methodology, Research Design