50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 34 results
Peer reviewed
Direct link
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in programs' main and interaction effects of two variables (A and B, e.g., two treatment variables or one treatment variable and one moderator) on outcomes. A challenge for estimating main and interaction effects is to eliminate selection bias across A-by-B groups. I introduce Rubin's…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
Peer reviewed
Direct link
St. Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
Peer reviewed
Direct link
Mueller, Christoph Emanuel; Gaus, Hansjoerg; Rech, Joerg – American Journal of Evaluation, 2014
This article proposes an innovative approach to estimating the counterfactual without the necessity of generating information from either a control group or a before-measure. Building on the idea that program participants are capable of estimating the hypothetical state they would be in had they not participated, the basics of the Roy-Rubin model…
Descriptors: Research Design, Program Evaluation, Research Methodology, Models
Peer reviewed
Direct link
Campbell, Rebecca; Greeson, Megan R.; Fehler-Cabral, Giannina – American Journal of Evaluation, 2014
This article describes the process by which we created a recruitment protocol for engaging adolescent sexual assault victims in a qualitative evaluation study. Working in collaboration with forensic nurses, rape victim advocates, adolescent rape survivors, and our institutional review board (IRB), we created a prospective recruitment method…
Descriptors: Sexual Abuse, Recruitment, Trauma, Adolescents
Peer reviewed
Direct link
Holvoet, Nathalie; Dewachter, Sara – American Journal of Evaluation, 2013
National Evaluation Societies (NES) are situated at the intersection between Monitoring and Evaluation (M&E) supply and demand. To date, little research has explored NES and their potential for strengthening national M&E. This study addresses this gap, examining perceived NES performance relevant to organizational and policy-oriented goals…
Descriptors: Foreign Countries, Developing Nations, National Organizations, Professional Associations
Peer reviewed
Direct link
Henry, Gary T.; Smith, Adrienne A.; Kershaw, David C.; Zulli, Rebecca A. – American Journal of Evaluation, 2013
Performance-based accountability along with budget tightening has increased pressure on publicly funded organizations to develop and deliver programs that produce meaningful social benefits. As a result, there is increasing need to undertake formative evaluations that estimate preliminary program outcomes and identify promising program components…
Descriptors: Formative Evaluation, Program Evaluation, Program Effectiveness, Longitudinal Studies
Peer reviewed
Direct link
Hansen, Henrik; Klejnstrup, Ninja Ritter; Andersen, Ole Winckler – American Journal of Evaluation, 2013
There is a long-standing debate as to whether nonexperimental estimators of causal effects of social programs can overcome selection bias. Most existing reviews either are inconclusive or point to significant selection biases in nonexperimental studies. However, many of the reviews, the so-called "between-studies," do not make direct…
Descriptors: Foreign Countries, Developing Nations, Outcome Measures, Comparative Analysis
Peer reviewed
Direct link
Azzam, Tarek; Jacobson, Miriam R. – American Journal of Evaluation, 2013
This article explores the viability of online crowdsourcing for creating matched-comparison groups. This exploratory study compares survey results from a randomized control group to survey results from a matched-comparison group created from Amazon.com's MTurk crowdsourcing service to determine their comparability. Study findings indicate…
Descriptors: Matched Groups, Control Groups, Comparative Analysis, Evaluation
Peer reviewed
Direct link
Ledermann, Simone – American Journal of Evaluation, 2012
Research has identified a wide range of factors that affect evaluation use but continues to be inconclusive as to their relative importance. This article addresses the complex phenomenon of evaluation use in three ways: first, it draws on recent conceptual developments to delimitate the examined form of use; second, it aims at identifying…
Descriptors: Comparative Analysis, Evaluation, Foreign Countries, Interviews
Peer reviewed
Direct link
Scheirer, Mary Ann; Mark, Melvin M.; Brooks, Ariana; Grob, George F.; Chapel, Thomas J.; Geisz, Mary; McKaughan, Molly; Leviton, Laura – American Journal of Evaluation, 2012
Linking evaluation methods to the several phases of a program's life cycle can provide evaluation planners and funders with guidance about what types of evaluation are most appropriate over the trajectory of social and educational programs and other interventions. If methods are matched to the needs of program phases, evaluation can and should…
Descriptors: Evidence, Evaluation Methods, Program Development, Life Cycle Costing
Peer reviewed
Direct link
Goritz, Anja S.; Crutzen, Rik – American Journal of Evaluation, 2012
Evidence-based insight on the effectiveness of reminders in web-based data collection in online panels is scarce. Thirty-eight studies were conducted in three different online panels to examine the effect of reminders on response rates, retention rates, and two facets of response quality (i.e., item omissions and response nondifferentiation). The…
Descriptors: Data Collection, Computer Literacy, Internet, Web Based Instruction
Peer reviewed
Direct link
Sager, Fritz; Andereggen, Celine – American Journal of Evaluation, 2012
In this article, the authors state two arguments: first, that the four categories of context, politics, polity, and policy make an adequate framework for systematic review being both exhaustive and parsimonious; second, that the method of qualitative comparative analysis (QCA) is an appropriate methodical approach for gaining realistic results…
Descriptors: Comparative Analysis, Qualitative Research, Synthesis, Politics
Peer reviewed
Direct link
McDavid, James C.; Huse, Irene – American Journal of Evaluation, 2012
A key assumption in efforts to implement and improve cross-government public reporting systems is that legislators will make use of the performance information to enhance accountability and improve program and policy effectiveness. This five-year study is an assessment of expectations and actual uses of annual performance reports by elected…
Descriptors: Legislators, Reports, Performance, Measurement
Peer reviewed
Direct link
Jackson, Suzanne F.; Kolla, Gillian – American Journal of Evaluation, 2012
In attempting to use a realistic evaluation approach to explore the role of Community Parents in early parenting programs in Toronto, a novel technique was developed to analyze the links between contexts (C), mechanisms (M) and outcomes (O) directly from experienced practitioner interviews. Rather than coding the interviews into themes in terms of…
Descriptors: Foreign Countries, Semi Structured Interviews, Qualitative Research, Comparative Analysis
Previous Page | Next Page »
Pages: 1  |  2  |  3