Publication Date

| Date range | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 1 |
| Since 2011 (last 5 years) | 1 |
| Since 2006 (last 10 years) | 8 |
| Since 1996 (last 20 years) | 8 |

Descriptor

| Descriptor | Count |
| --- | --- |
| Intervention | 7 |
| Scores | 6 |
| Computation | 5 |
| Statistical Analysis | 5 |
| Regression (Statistics) | 4 |
| Causal Models | 3 |
| Control Groups | 3 |
| Models | 3 |
| Pretests Posttests | 3 |
| Research Design | 3 |

Author

| Author | Count |
| --- | --- |
| Schochet, Peter Z. | 8 |
| Chiang, Hanley | 1 |
| Chiang, Hanley S. | 1 |
| Deke, John | 1 |
| Puma, Mike | 1 |

Publication Type

| Publication Type | Count |
| --- | --- |
| Reports - Evaluative | 7 |
| Information Analyses | 1 |
| Reports - Descriptive | 1 |

Education Level

| Education Level | Count |
| --- | --- |
| Elementary Education | 2 |
| Elementary Secondary Education | 2 |

Audience

| Audience | Count |
| --- | --- |
| Policymakers | 1 |
| Researchers | 1 |
| Teachers | 1 |
Showing all 8 results
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Schochet, Peter Z.; Chiang, Hanley S. – National Center for Education Evaluation and Regional Assistance, 2010
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Descriptors: Teacher Effectiveness, Teacher Evaluation, Student Evaluation, Scores
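The Empirical Bayes estimators mentioned in this abstract shrink noisy teacher-level performance estimates toward the overall mean, in proportion to their estimated reliability. A minimal illustrative sketch with simulated data and a constant sampling error (not the report's error rate formulas):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-teacher value-added estimates: true effect + sampling noise.
n_teachers = 200
true_effects = rng.normal(0.0, 0.15, n_teachers)   # between-teacher SD
sampling_se = np.full(n_teachers, 0.10)            # within-teacher SE
raw_estimates = true_effects + rng.normal(0.0, sampling_se)

# Empirical Bayes shrinkage: pull each noisy estimate toward the grand mean,
# weighting by the estimated signal-to-total variance ratio (reliability).
grand_mean = raw_estimates.mean()
total_var = raw_estimates.var(ddof=1)
signal_var = max(total_var - np.mean(sampling_se**2), 0.0)
reliability = signal_var / (signal_var + sampling_se**2)
eb_estimates = grand_mean + reliability * (raw_estimates - grand_mean)

# Shrunken estimates are less dispersed than the raw ones.
print(raw_estimates.std(ddof=1) > eb_estimates.std(ddof=1))  # True
```

The reduced dispersion is the point: shrinkage trades a little bias for a large reduction in the chance of misclassifying a teacher on the basis of sampling noise alone.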
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2009
This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…
Descriptors: Control Groups, Causal Models, Statistical Significance, Computation
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2009
For RCTs of education interventions, it is often of interest to estimate associations between student and mediating teacher practice outcomes, to examine the extent to which the study's conceptual model is supported by the data, and to identify specific mediators that are most associated with student learning. This paper develops statistical power…
Descriptors: Statistical Analysis, Intervention, Teacher Influence, Teaching Methods
Schochet, Peter Z.; Chiang, Hanley – National Center for Education Evaluation and Regional Assistance, 2009
In randomized controlled trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This report uses a causal inference and instrumental variables framework to examine the…
Descriptors: Educational Research, Causal Models, Regression (Statistics), Educational Policy
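In its simplest instrumental-variables form, the CACE is the Wald ratio: the intent-to-treat effect divided by the compliance rate. A hedged sketch with made-up summary numbers (not figures from the report), assuming one-sided noncompliance:

```python
# Hypothetical summary numbers for an RCT with one-sided noncompliance.
mean_treatment_group = 52.0   # mean outcome, students assigned to treatment
mean_control_group = 50.0     # mean outcome, students assigned to control
takeup_rate = 0.8             # share of the treatment group actually served

# Intent-to-treat effect: the effect of *assignment*, diluted by no-shows.
itt = mean_treatment_group - mean_control_group

# Complier average causal effect via the Wald/IV ratio: rescale the ITT
# by the compliance rate (valid under standard IV assumptions).
cace = itt / takeup_rate

print(itt, cace)  # 2.0 2.5
```

Because random assignment is the instrument, the rescaling attributes the entire ITT effect to the 80 percent of students who actually received services.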
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2008
This report examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and HLM modeling literature, and the empirical work focuses on commonly used designs in education research to test intervention effects on…
Descriptors: Research Methodology, Models, Regression (Statistics), Sample Size
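Clustering inflates the variance of impact estimates by the design effect 1 + (m − 1)·ICC, where m is the cluster size. The sketch below computes a minimum detectable effect for a generic clustered two-arm design under a normal approximation; it is an illustration of the clustering penalty, not the report's RD-specific power formulas:

```python
import math
from statistics import NormalDist

def mde(n_students, cluster_size, icc, alpha=0.05, power=0.80):
    """Minimum detectable effect size (outcome SD = 1, equal-sized arms)."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)
    # Clustering inflates variance by the design effect 1 + (m - 1) * ICC.
    design_effect = 1 + (cluster_size - 1) * icc
    se = math.sqrt(4 / n_students * design_effect)
    return multiplier * se

# Same total sample: ignoring clustering badly understates the detectable effect.
print(round(mde(2000, 1, 0.0), 3), round(mde(2000, 25, 0.15), 3))  # 0.125 0.269
```

With 2,000 students in clusters of 25 and an ICC of 0.15, the detectable effect more than doubles relative to simple random assignment of the same students.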
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2008
Pretest-posttest experimental designs are often used in randomized controlled trials (RCTs) in the education field to improve the precision of the estimated treatment effects. For logistical reasons, however, pretest data are often collected after random assignment, so that including them in the analysis could bias the posttest impact estimates. Thus,…
Descriptors: Pretests Posttests, Pretesting, Scores, Intervention
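The precision gain comes from covariate adjustment: regressing the posttest on treatment status and the pretest recovers the same impact with a smaller standard error, because the pretest absorbs much of the outcome variance. A simulated sketch (illustrative only, not the report's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an RCT where the pretest is strongly correlated with the posttest.
n = 2000
pretest = rng.normal(0, 1, n)
treat = rng.integers(0, 2, n)
effect = 0.2
posttest = 0.7 * pretest + effect * treat + rng.normal(0, 0.5, n)

# Unadjusted impact estimate: simple difference in posttest means.
diff_means = posttest[treat == 1].mean() - posttest[treat == 0].mean()

# Pretest-adjusted estimate: OLS of posttest on an intercept, treatment, pretest.
X = np.column_stack([np.ones(n), treat, pretest])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
adjusted = beta[1]

# Both estimates center on the true effect of 0.2; the adjusted one varies less
# across replications because the pretest soaks up outcome variance.
print(round(diff_means, 2), round(adjusted, 2))
```

The bias concern raised in the abstract arises only if the "pretest" is itself affected by treatment, which is why the timing of pretest collection relative to random assignment matters.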
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2008
This report presents guidelines for addressing the multiple comparisons problem in impact evaluations in the education area. The problem occurs due to the large number of hypothesis tests that are typically conducted across outcomes and subgroups in these studies, which can lead to spurious statistically significant impact findings. The…
Descriptors: Guidelines, Testing, Hypothesis Testing, Statistical Significance
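Two standard adjustments for the multiple comparisons problem described in this abstract are Bonferroni (familywise error rate) and Benjamini-Hochberg (false discovery rate). A small illustrative sketch with hypothetical p-values; the report's specific guidelines may differ:

```python
import numpy as np

# Hypothetical p-values from tests across several outcomes/subgroups.
p = np.array([0.001, 0.008, 0.020, 0.041, 0.150, 0.430])
alpha = 0.05
m = len(p)

# Bonferroni: control the familywise error rate by testing each at alpha/m.
bonferroni_reject = p < alpha / m

# Benjamini-Hochberg: control the false discovery rate; reject the k smallest
# p-values, where k is the largest index with p_(k) <= (k/m) * alpha.
order = np.argsort(p)
sorted_p = p[order]
thresholds = alpha * np.arange(1, m + 1) / m
below = sorted_p <= thresholds
k = below.nonzero()[0].max() + 1 if below.any() else 0
bh_reject = np.zeros(m, dtype=bool)
bh_reject[order[:k]] = True

print(bonferroni_reject.sum(), bh_reject.sum())  # 2 3
```

As expected, the FDR procedure rejects more hypotheses than Bonferroni at the same alpha, which is why the choice of error criterion is itself a substantive decision in impact evaluations.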