50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 20 results
Peer reviewed | Direct link
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M. – Journal of Research on Educational Effectiveness, 2015
We study the properties of two specification tests that have been applied to a variety of estimators in the context of value-added measures (VAMs) of teacher and school quality: the Hausman test for choosing between student-level random and fixed effects, and a test for feedback (sometimes called a "falsification test"). We discuss…
Descriptors: Teacher Effectiveness, Educational Quality, Evaluation Methods, Tests
Peer reviewed | Direct link
Zamarro, Gema; Engberg, John; Saavedra, Juan Esteban; Steele, Jennifer – Journal of Research on Educational Effectiveness, 2015
This article investigates the use of teacher value-added estimates to assess the distribution of effective teaching across students of varying socioeconomic disadvantage in the presence of classroom composition effects. We examine, via simulations, how accurately commonly used teacher value-added estimators recover the rank correlation between…
Descriptors: Teacher Effectiveness, Disadvantaged Youth, Socioeconomic Influences, Socioeconomic Status
Peer reviewed | Direct link
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2014
Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…
Descriptors: Correlation, Statistical Analysis, Multivariate Analysis, Research Design
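The power calculations the Rhoads article refers to hinge on the intraclass correlation (ICC): outcomes within a cluster are correlated, which inflates the variance of the treatment-effect estimate by the "design effect." A minimal sketch of that relationship (the numeric values below are illustrative, not from the article):

```python
def design_effect(icc: float, cluster_size: int) -> float:
    """Variance inflation from cluster randomization: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, icc: float, cluster_size: int) -> float:
    """Number of independent observations the clustered sample is 'worth'."""
    return n_total / design_effect(icc, cluster_size)

# Illustrative: 40 classrooms of 25 students with ICC = 0.05.
deff = design_effect(0.05, 25)            # 1 + 24 * 0.05 = 2.2
n_eff = effective_sample_size(1000, 0.05, 25)
```

Even a modest ICC of 0.05 more than halves the effective sample size here, which is why power in cluster randomized designs depends so heavily on the number of clusters rather than the number of individuals.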
Peer reviewed | Direct link
Denton, Carolyn A.; Fletcher, Jack M.; Taylor, W. Pat; Barth, Amy E.; Vaughn, Sharon – Journal of Research on Educational Effectiveness, 2014
Considerable research evidence supports the provision of explicit instruction for students at risk for reading difficulties; however, one of the most widely implemented approaches to early reading instruction is Guided Reading (GR; Fountas & Pinnell, 1996), which deemphasizes explicit instruction and practice of reading skills in favor of…
Descriptors: Elementary School Students, At Risk Students, Reading Difficulties, Intervention
Peer reviewed | Direct link
Spybrook, Jessaca; Hedges, Larry; Borenstein, Michael – Journal of Research on Educational Effectiveness, 2014
Research designs in which clusters are the unit of randomization are quite common in the social sciences. Given the multilevel nature of these studies, the power analyses for these studies are more complex than in a simple individually randomized trial. Tools are now available to help researchers conduct power analyses for cluster randomized…
Descriptors: Statistical Analysis, Research Design, Vocabulary, Coding
Peer reviewed | Direct link
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica – Journal of Research on Educational Effectiveness, 2013
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Descriptors: Educational Research, Research Design, Sample Size, Accuracy
Peer reviewed | Direct link
Dong, Nianbo; Maynard, Rebecca – Journal of Research on Educational Effectiveness, 2013
This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Descriptors: Effect Size, Sample Size, Research Design, Quasiexperimental Design
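The MDES framework the Dong and Maynard abstract describes can be sketched for the simplest case, a two-level cluster randomized trial. The formula and parameter values below are a standard textbook version under assumed defaults (equal allocation, no covariates), not a reproduction of the authors' tool:

```python
import math

def mdes_cluster_rct(j_clusters: int, n_per_cluster: int, icc: float,
                     p_treat: float = 0.5, multiplier: float = 2.8) -> float:
    """Minimum detectable effect size (in SD units) for a 2-level cluster RCT.

    multiplier ~= 2.8 is the conventional value for alpha = .05 (two-tailed)
    and 80% power with large degrees of freedom.
    """
    alloc = p_treat * (1 - p_treat)
    var_term = (icc / (alloc * j_clusters)
                + (1 - icc) / (alloc * j_clusters * n_per_cluster))
    return multiplier * math.sqrt(var_term)

# Illustrative: 40 schools of 25 students, ICC = 0.15.
mdes = mdes_cluster_rct(40, 25, 0.15)
```

With these assumed inputs the trial can only detect effects of roughly 0.38 standard deviations, which illustrates why such calculators matter at the planning stage.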
Peer reviewed | Direct link
Bloom, Howard S. – Journal of Research on Educational Effectiveness, 2012
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…
Descriptors: Regression (Statistics), Research Design, Cutting Scores, Computation
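The core logic of a sharp RD analysis, as discussed in the Bloom article, is to fit separate regressions on each side of the cutoff and take the difference in fitted values at the cutoff itself. A minimal simulated sketch (all parameter values are invented for illustration; real applications use careful bandwidth selection and robustness checks):

```python
import random
import statistics

def predict_at(xs, ys, x0):
    """Fit y = a + b*x by OLS and return the fitted value at x0."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my + b * (x0 - mx)

random.seed(0)
cutoff, h = 0.0, 0.5                # cutoff score and bandwidth (illustrative)
data = []
for _ in range(5000):
    x = random.uniform(-1, 1)                           # rating score
    t = 1 if x >= cutoff else 0                         # sharp assignment rule
    y = 1.0 + 0.5 * x + 2.0 * t + random.gauss(0, 0.5)  # true effect = 2.0
    data.append((x, y))

left = [(x, y) for x, y in data if cutoff - h <= x < cutoff]
right = [(x, y) for x, y in data if cutoff <= x <= cutoff + h]
effect = (predict_at([x for x, _ in right], [y for _, y in right], cutoff)
          - predict_at([x for x, _ in left], [y for _, y in left], cutoff))
```

The estimate recovers the treatment effect only for individuals near the cutoff, which is exactly the local-validity caveat the RD literature emphasizes.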
Peer reviewed | Direct link
Reardon, Sean F.; Robinson, Joseph P. – Journal of Research on Educational Effectiveness, 2012
In the absence of a randomized control trial, regression discontinuity (RD) designs can produce plausible estimates of the treatment effect on an outcome for individuals near a cutoff score. In the standard RD design, individuals with rating scores higher than some exogenously determined cutoff score are assigned to one treatment condition; those…
Descriptors: Regression (Statistics), Research Design, Cutting Scores, Computation
Peer reviewed | Direct link
Bloom, Howard S. – Journal of Research on Educational Effectiveness, 2012
In this article, the author shares his comments on statistical analysis for multisite trials, and focuses on the contribution of Stephen Raudenbush, Sean Reardon, and Takako Nomi to future research. Raudenbush, Reardon, and Nomi provide a major contribution to future research on variation in program impacts by showing how to use multisite trials…
Descriptors: Program Evaluation, Statistical Analysis, Computation, Program Effectiveness
Peer reviewed | Direct link
Imai, Kosuke – Journal of Research on Educational Effectiveness, 2012
The author begins this discussion by thanking Larry Hedges, the editor of the journal, for giving him an opportunity to provide a commentary on this stimulating article. He also would like to congratulate the authors of the article for their insightful discussion on causal mediation analysis, which is one of the most important and challenging…
Descriptors: Statistical Analysis, Attribution Theory, Probability, Statistical Bias
Peer reviewed | Direct link
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako – Journal of Research on Educational Effectiveness, 2012
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…
Descriptors: Program Evaluation, Statistical Analysis, Hierarchical Linear Modeling, Computation
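The imperfect-compliance problem in the Raudenbush, Reardon, and Nomi abstract is classically handled, at a single site, by the Wald/IV estimator: scale the intent-to-treat effect by the compliance rate. A minimal simulated sketch (the take-up rates and effect size are assumptions for illustration, and the article's multisite random-coefficient model goes well beyond this):

```python
import random
import statistics

random.seed(1)
n = 20000
z = [random.random() < 0.5 for _ in range(n)]        # random assignment
# Imperfect compliance: 80% take-up if assigned, 10% crossover if not (assumed).
d = [random.random() < (0.8 if zi else 0.1) for zi in z]
y = [2.0 * di + random.gauss(0, 1) for di in d]      # true treatment effect = 2.0

def mean_where(vals, flags, want):
    return statistics.fmean(v for v, f in zip(vals, flags) if f == want)

itt = mean_where(y, z, True) - mean_where(y, z, False)   # intent-to-treat effect
take = mean_where(d, z, True) - mean_where(d, z, False)  # first stage: compliance
late = itt / take                                        # Wald / IV estimate
```

The ITT effect is diluted to roughly 70% of the true effect here; dividing by the compliance differential recovers the effect for compliers, and the article's contribution is letting that IV estimate vary by site.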
Peer reviewed | Direct link
Carlisle, Joanne F.; Kelcey, Ben; Rowan, Brian; Phelps, Geoffrey – Journal of Research on Educational Effectiveness, 2011
This study developed a new survey of teachers' knowledge about early reading and examined the effects of teachers' knowledge on students' reading achievement in Grades 1 to 3 in a large sample of Michigan schools. Using statistical models that controlled for teachers' personal and professional characteristics, students' prior reading achievement,…
Descriptors: Reading Comprehension, Early Reading, Reading Achievement, Statistical Analysis
Peer reviewed | Direct link
Konstantopoulos, Spyros – Journal of Research on Educational Effectiveness, 2011
Field experiments that involve nested structures frequently assign treatment conditions to entire groups (such as schools). A key aspect of the design of such experiments includes knowledge of the clustering effects that are often expressed via intraclass correlation. This study provides methods for constructing a more powerful test for the…
Descriptors: Correlation, Field Studies, Experiments, Statistical Analysis
Peer reviewed | Direct link
Weiss, Michael J. – Journal of Research on Educational Effectiveness, 2010
In some experimental evaluations of classroom-level interventions it is not practically feasible to randomly assign teachers to experimental conditions. Given such restrictions, researchers may randomly assign students to experimental conditions and consider the teacher to be a part of the intervention. However, in an individually randomized…
Descriptors: Control Groups, Research Design, Intervention, Teacher Selection
Previous Page | Next Page »
Pages: 1  |  2