50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15th, 1964, ERIC continues a long tradition of innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 58 results
Peer reviewed
Direct link
Jacob, Robin T.; Goddard, Roger D.; Kim, Eun Sook – Educational Evaluation and Policy Analysis, 2014
It is often difficult and costly to obtain individual-level student achievement data, yet, researchers are frequently reluctant to use school-level achievement data that are widely available from state websites. We argue that public-use aggregate school-level achievement data are, in fact, sufficient to address a wide range of evaluation questions…
Descriptors: Academic Achievement, Data, Information Utilization, Educational Assessment
Peer reviewed
Direct link
Koski, William S.; Horng, Eileen L. – Educational Evaluation and Policy Analysis, 2014
In this invited response to Moe and Anzia (2014), we describe both the points of convergence and divergence between our prior research (2007a, 2007b) and that of Moe (2005) and Moe and Anzia (2014). We also respond to Moe and Anzia's critique of our published work. Moe and Anzia's study helps to refine the policy discussion around…
Descriptors: Collective Bargaining, Teacher Competencies, Disadvantaged Schools, Teacher Transfer
Peer reviewed
Direct link
Anzia, Sarah F.; Moe, Terry M. – Educational Evaluation and Policy Analysis, 2014
In this article, the authors, Sarah F. Anzia and Terry M. Moe, offer a retort to William S. Koski and Eileen L. Horng's argument that their study of seniority-based transfer rules is narrow and that its findings apply only under limited circumstances. The Koski-Horng study, by contrast, takes a broad frame--so broad that, as detailed in…
Descriptors: Teacher Transfer, Status, Teaching Experience, Disadvantaged Schools
Peer reviewed
Direct link
Kelcey, Ben; Phelps, Geoffrey – Educational Evaluation and Policy Analysis, 2013
Despite recent shifts in research emphasizing the value of carefully designed experiments, the number of studies of teacher professional development with rigorous designs has lagged behind its student outcome counterparts. We outline a framework for the design of group randomized trials (GRTs) with teachers' knowledge as the outcome and…
Descriptors: Research Design, Faculty Development, Educational Research, Reading
Peer reviewed
Direct link
Frank, Kenneth A.; Maroulis, Spiro J.; Duong, Minh Q.; Kelcey, Benjamin M. – Educational Evaluation and Policy Analysis, 2013
We contribute to debate about causal inferences in educational research in two ways. First, we quantify how much bias there must be in an estimate to invalidate an inference. Second, we utilize Rubin's causal model to interpret the bias necessary to invalidate an inference in terms of sample replacement. We apply our analysis to an inference…
Descriptors: Causal Models, Inferences, Research Methodology, Robustness (Statistics)
Peer reviewed
Direct link
Banks, George C.; Kepes, Sven; Banks, Karen P. – Educational Evaluation and Policy Analysis, 2012
This article offers three contributions for conducting meta-analytic reviews in education research. First, we review publication bias and the challenges it presents for meta-analytic researchers. Second, we review the most recent and optimal techniques for evaluating the presence and influence of publication bias in meta-analyses. We then…
Descriptors: Meta Analysis, Educational Research, Bias, Publishing Industry
Peer reviewed
Direct link
Zhu, Pei; Jacob, Robin; Bloom, Howard; Xu, Zeyu – Educational Evaluation and Policy Analysis, 2012
This paper provides practical guidance for researchers who are designing and analyzing studies that randomize schools--which comprise three levels of clustering (students within classrooms within schools)--to measure intervention effects on student academic outcomes when information on the middle level (classrooms) is missing. This situation arises…
Descriptors: Educational Research, Educational Researchers, Research Methodology, Multivariate Analysis
Peer reviewed
Direct link
Song, Mengli; Herman, Rebecca – Educational Evaluation and Policy Analysis, 2010
Drawing on our five years of experience developing WWC evidence standards and reviewing studies against those standards, as well as the current literature on the design of impact studies, we highlight in this paper some of the most critical issues and common pitfalls in designing and conducting impact studies in education, and provide practical…
Descriptors: Clearinghouses, Program Evaluation, Program Effectiveness, Research Methodology
Peer reviewed
Direct link
Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta – Educational Evaluation and Policy Analysis, 2010
The No Child Left Behind Act requires states to publish high school graduation rates for public schools; the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policymakers about how to measure high school graduation rates. We use…
Descriptors: High Schools, Graduation Rate, Academic Persistence, Longitudinal Studies
Peer reviewed
Direct link
Harris, Douglas N. – Educational Evaluation and Policy Analysis, 2009
The common reporting of effect sizes has been an important advance in education research in recent years. However, the benchmarks used to interpret the size of these effects--as small, medium, and large--do little to inform educational administration and policy making because they do not account for program costs. The author proposes an approach…
Descriptors: Class Size, Educational Policy, Effect Size, Cost Effectiveness
Peer reviewed
Direct link
Slavin, Robert; Smith, Dewi – Educational Evaluation and Policy Analysis, 2009
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Descriptors: Sample Size, Effect Size, Correlation, Educational Experiments
Peer reviewed
Direct link
Kennedy, Mary M. – Educational Evaluation and Policy Analysis, 2008
The influence of teachers' qualifications on their teaching practice has been subject to debate. Literature reviews do not settle these debates, partly because the literature is uneven and partly because reviews capture only narrow slices of the literature. In particular, many reviews eliminate qualitative studies. Yet without examining qualitative…
Descriptors: Qualitative Research, Teacher Qualifications, Educational Research, Teacher Education Programs
Peer reviewed
Direct link
Hedges, Larry V.; Hedberg, E. C. – Educational Evaluation and Policy Analysis, 2007
Experiments that assign intact groups to treatment conditions are increasingly common in social research. In educational research, the groups assigned are often schools. The design of group-randomized experiments requires knowledge of the intraclass correlation structure to compute statistical power and sample sizes required to achieve adequate…
Descriptors: Educational Research, Academic Achievement, Correlation, Experiments
Peer reviewed
Direct link
Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck – Educational Evaluation and Policy Analysis, 2007
This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized…
Descriptors: Urban Schools, Pretests Posttests, Educational Change, Intervention
Peer reviewed
Direct link
Confrey, Jere – Educational Evaluation and Policy Analysis, 2006
This article summarizes the findings of the National Research Council (NRC) report "On Evaluating Curricular Effectiveness" and examines the reviews in middle grades mathematics undertaken by the What Works Clearinghouse (WWC). The NRC report reviewed and assessed 147 key evaluations of 13 National Science Foundation-supported K-12 mathematics…
Descriptors: Program Effectiveness, Elementary Secondary Education, Quasiexperimental Design, Content Analysis