50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 10 results
Peer reviewed
Direct link
Hall, Jori N.; Freeman, Melissa – American Journal of Evaluation, 2014
Shadowing is a data collection method that involves following a person as they carry out the everyday activities relevant to a research study. This article explores the use of shadowing in a formative evaluation of a professional development school (PDS). Specifically, this article discusses how shadowing was used to understand the role of a…
Descriptors: Formative Evaluation, Capacity Building, Professional Development Schools, Data Collection
Peer reviewed
Direct link
Henry, Gary T.; Smith, Adrienne A.; Kershaw, David C.; Zulli, Rebecca A. – American Journal of Evaluation, 2013
Performance-based accountability, along with budget tightening, has increased pressure on publicly funded organizations to develop and deliver programs that produce meaningful social benefits. As a result, there is an increasing need to undertake formative evaluations that estimate preliminary program outcomes and identify promising program components…
Descriptors: Formative Evaluation, Program Evaluation, Program Effectiveness, Longitudinal Studies
Peer reviewed
Direct link
Brandon, Paul R.; Harrison, George M.; Lawton, Brian E. – American Journal of Evaluation, 2013
When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
Descriptors: Statistical Analysis, Correlation, Effect Size, Benchmarking
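The abstract above concerns statistical power analyses for site-randomized (cluster-randomized) experiments; the authors' actual tool is a SAS macro, which is not reproduced here. As a hedged illustration of the kind of calculation involved, the sketch below computes an approximate minimum detectable effect size (MDES) for a two-arm cluster-randomized design using the standard Bloom-style formula and a normal approximation to the critical-value multiplier. All parameter names and example values are illustrative assumptions, not taken from the article.

```python
from statistics import NormalDist

def mdes_cluster_randomized(n_clusters, cluster_size, icc,
                            alpha=0.05, power=0.80, p_treat=0.5):
    """Approximate minimum detectable effect size (in standard-deviation
    units) for a two-arm cluster-randomized trial.

    Uses the normal approximation to the t-based multiplier:
    MDES ~= (z_{1-alpha/2} + z_{power}) * sqrt(variance of the
    estimated treatment effect), where the variance depends on the
    intraclass correlation (icc), the number of clusters, the cluster
    size, and the proportion of clusters assigned to treatment.
    Illustrative sketch only -- not the SAS macro from the article.
    """
    # Multiplier: two-sided significance plus desired power.
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # Variance of the standardized treatment-effect estimate:
    # between-cluster component + within-cluster component.
    denom = p_treat * (1 - p_treat) * n_clusters
    var = icc / denom + (1 - icc) / (denom * cluster_size)
    return z * var ** 0.5

# Hypothetical example: 40 schools of 25 students each, ICC = 0.15.
print(round(mdes_cluster_randomized(40, 25, 0.15), 3))
```

As the example suggests, the between-cluster term dominates once clusters are even moderately large, which is why valid power analyses need jurisdiction-specific intraclass correlations, the point the method note emphasizes.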
Peer reviewed
Direct link
Zvoch, Keith – American Journal of Evaluation, 2012
Multilevel modeling techniques facilitated examination of relationships between fidelity indicators and outcomes associated with a summer literacy intervention. Three-level growth models were specified to capture the extent to which students experienced instruction and to demonstrate the ways in which dosage-response relationships manifest in…
Descriptors: Literacy, Summer Programs, College School Cooperation, Intervention
Peer reviewed
Direct link
Century, Jeanne; Rudnick, Mollie; Freeman, Cassie – American Journal of Evaluation, 2010
There is a growing recognition of the value of measuring fidelity of implementation (FOI) as a necessary part of evaluating interventions. However, evaluators do not have a shared conceptual understanding of what FOI is and how to measure it. Thus, the creation of FOI measures is typically a secondary focus and based on specific contexts and…
Descriptors: Intervention, Program Implementation, Measurement Techniques, Evaluators
Peer reviewed
Direct link
Smith, Nick L.; Brandon, Paul R.; Lawton, Brian E.; Krohn-Ching, Val – American Journal of Evaluation, 2010
This is the first examination of exemplary evaluation under a new editorial approach, in which the authors are attempting not only to report how the evaluation was conducted and to explain the rationale for design and implementation but also to examine the conditions, events, or actions that might have contributed to its exemplary status. This…
Descriptors: Elementary School Students, Reading Comprehension, Program Evaluation, Grants
Peer reviewed
Direct link
Newton, Xiaoxia A.; Llosa, Lorena – American Journal of Evaluation, 2010
Most K-12 evaluations are designed to make inferences about how a program implemented at the classroom or school level affects student learning outcomes, and such inferences inherently involve hierarchical data structures. One methodological challenge for evaluators is linking program implementation factors typically measured at the classroom or…
Descriptors: Program Evaluation, Reading Programs, Reading Achievement, Program Implementation
Peer reviewed
Direct link
Bisset, Sherri; Daniel, Mark; Potvin, Louise – American Journal of Evaluation, 2009
It has been acknowledged for several decades that programs interact with context. The nature of this interactivity, and how it defines a program, has not been adequately addressed. We view this lacuna as a function of the dominant theoretical perspectives guiding knowledge of program operations. We propose the actor-network theory (ANT) and its…
Descriptors: Intervention, Translation, Health Personnel, Nutrition
Peer reviewed
Direct link
Barela, Eric – American Journal of Evaluation, 2008
This article describes an educational program evaluation conducted by Dr. Eric Barela. Barela is a senior educational research analyst with the research and planning division of the Los Angeles Unified School District (LAUSD). He joined the division as an urban education research fellow and has been an internal evaluator with LAUSD for almost six…
Descriptors: Elementary Schools, Educational Research, Program Evaluation, Urban Education
Peer reviewed
Direct link
Ehren, Melanie C. M.; Leeuw, Frans L.; Scheerens, Jaap – American Journal of Evaluation, 2005
This article uses a policy scientific approach to reconstruct assumptions underlying the Dutch Educational Supervision Act. We show an example of how to reconstruct and evaluate a program theory that is based on legislation of inspection. The assumptions explain how inspection leads to school improvement. Evaluation of these assumptions is used to…
Descriptors: Program Effectiveness, Elementary Education, Supervision, Educational Change