Publication Date

| Date range | Count |
| --- | --- |
| In 2015 | 2 |
| Since 2014 | 6 |
| Since 2011 (last 5 years) | 17 |
| Since 2006 (last 10 years) | 31 |
| Since 1996 (last 20 years) | 37 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Research Design | 37 |
| Program Evaluation | 21 |
| Evaluation Methods | 20 |
| Research Methodology | 19 |
| Evaluation Research | 8 |
| Evaluators | 8 |
| Intervention | 8 |
| Comparative Analysis | 7 |
| Program Effectiveness | 7 |
| Control Groups | 6 |
Source

| Source | Count |
| --- | --- |
| American Journal of Evaluation | 37 |
Author

| Author | Count |
| --- | --- |
| Azzam, Tarek | 3 |
| Peck, Laura R. | 3 |
| Bell, Stephen H. | 2 |
| Cook, Thomas D. | 2 |
| Coryn, Chris L. S. | 2 |
| Gaus, Hansjoerg | 2 |
| Mertens, Donna M. | 2 |
| Morris, Michael | 2 |
| Mueller, Christoph Emanuel | 2 |
| Andersen, Ole Winckler | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Journal Articles | 37 |
| Reports - Research | 15 |
| Reports - Evaluative | 11 |
| Reports - Descriptive | 6 |
| Information Analyses | 3 |
| Opinion Papers | 2 |
| Tests/Questionnaires | 1 |
Education Level

| Education Level | Count |
| --- | --- |
| Higher Education | 5 |
| Adult Education | 4 |
| Postsecondary Education | 3 |
| Elementary Secondary Education | 2 |
Showing 1 to 15 of 37 results
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in programs' main and interaction effects of two variables (A and B, e.g., two treatment variables or one treatment variable and one moderator) on outcomes. A challenge for estimating main and interaction effects is to eliminate selection bias across A-by-B groups. I introduce Rubin's…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
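The 2 × 2 main- and interaction-effect decomposition that Dong's abstract refers to can be illustrated with a minimal sketch over synthetic cell means. The function name and the simple means-based formulas are illustrative assumptions; this deliberately ignores the selection-bias correction across A-by-B groups that is the article's actual contribution.

```python
# Hypothetical sketch: main and interaction effects of two binary
# factors A and B from the four cell means of a 2x2 design.
# Synthetic data; not the article's bias-corrected estimator.

def effects_2x2(means):
    """means[a][b] = mean outcome in cell (A=a, B=b), with a, b in {0, 1}."""
    # Main effect of A: average change in outcome when A goes 0 -> 1.
    main_a = ((means[1][0] + means[1][1]) - (means[0][0] + means[0][1])) / 2
    # Main effect of B: average change in outcome when B goes 0 -> 1.
    main_b = ((means[0][1] + means[1][1]) - (means[0][0] + means[1][0])) / 2
    # Interaction: how much the effect of B differs between A=1 and A=0.
    interaction = (means[1][1] - means[1][0]) - (means[0][1] - means[0][0])
    return main_a, main_b, interaction

print(effects_2x2([[0.0, 1.0], [2.0, 5.0]]))  # → (3.0, 2.0, 2.0)
```

With observational data, the cell means would first need reweighting (e.g., by propensity scores) so that the A-by-B groups are comparable, which is the problem the article addresses.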
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
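The basic ITS logic behind the design St.Clair, Cook, and Hallberg evaluate can be sketched in a few lines: fit a linear trend to the pre-interruption observations, project it forward as the counterfactual, and take the mean post-period deviation as the effect. The data and function names are illustrative assumptions, and this single-series sketch omits the comparison series that makes the design "comparative."

```python
# Hypothetical single-series interrupted time series (ITS) sketch.
# Synthetic data; not the authors' actual CITS estimator.

def linear_fit(xs, ys):
    """Ordinary least squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def its_effect(series, cut):
    """Mean gap between observed post-period values and the pre-trend projection."""
    pre_x, pre_y = list(range(cut)), series[:cut]
    slope, intercept = linear_fit(pre_x, pre_y)
    gaps = [series[t] - (slope * t + intercept) for t in range(cut, len(series))]
    return sum(gaps) / len(gaps)

# Outcome rises by 1 per period, then jumps by 5 after the interruption at t=5.
series = [t + (5 if t >= 5 else 0) for t in range(10)]
print(its_effect(series, cut=5))  # → 5.0
```

A comparative ITS adds an untreated comparison series and differences the two gaps, which is the variant the article benchmarks against a randomized experiment.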
Ryan, Katherine E.; Gandha, Tysza; Culbertson, Michael J.; Carlson, Crystal – American Journal of Evaluation, 2014
In evaluation and applied social research, focus groups may be used to gather different kinds of evidence (e.g., opinion, tacit knowledge). In this article, we argue that making focus group design choices explicitly in relation to the type of evidence required would enhance the empirical value and rigor associated with focus group utilization. We…
Descriptors: Focus Groups, Research Methodology, Research Design, Educational Research
Mueller, Christoph Emanuel; Gaus, Hansjoerg; Rech, Joerg – American Journal of Evaluation, 2014
This article proposes an innovative approach to estimating the counterfactual without the necessity of generating information from either a control group or a before-measure. Building on the idea that program participants are capable of estimating the hypothetical state they would be in had they not participated, the basics of the Roy-Rubin model…
Descriptors: Research Design, Program Evaluation, Research Methodology, Models
Le Menestrel, Suzanne M.; Walahoski, Jill S.; Mielke, Monica B. – American Journal of Evaluation, 2014
The 4-H youth development organization is a complex public--private partnership between the U.S. Department of Agriculture's National Institute of Food and Agriculture, the nation's Cooperative Extension system and National 4-H Council, a private, nonprofit partner. The current article is focused on a partnership approach to the…
Descriptors: Youth Programs, Evaluators, Cooperation, Evaluation Methods
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment--control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Hansen, Henrik; Klejnstrup, Ninja Ritter; Andersen, Ole Winckler – American Journal of Evaluation, 2013
There is a long-standing debate as to whether nonexperimental estimators of causal effects of social programs can overcome selection bias. Most existing reviews either are inconclusive or point to significant selection biases in nonexperimental studies. However, many of the reviews, the so-called "between-studies," do not make direct…
Descriptors: Foreign Countries, Developing Nations, Outcome Measures, Comparative Analysis
Azzam, Tarek; Jacobson, Miriam R. – American Journal of Evaluation, 2013
This article explores the viability of online crowdsourcing for creating matched-comparison groups. This exploratory study compares survey results from a randomized control group to survey results from a matched-comparison group created from Amazon.com's MTurk crowdsourcing service to determine their comparability. Study findings indicate…
Descriptors: Matched Groups, Control Groups, Comparative Analysis, Evaluation
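The idea of building a matched-comparison group from a large candidate pool, as in the crowdsourcing study above, can be sketched with one-covariate nearest-neighbor matching. The function name, the single covariate, and matching with replacement are all illustrative assumptions, not the study's actual procedure.

```python
# Hypothetical nearest-neighbor matching sketch: for each treated unit,
# pick the pool member whose covariate value is closest (with replacement).
# Synthetic data; not the study's actual matching procedure.

def match_comparison(treated, pool):
    """Return one matched pool covariate value per treated unit."""
    return [min(pool, key=lambda c: abs(c - t)) for t in treated]

print(match_comparison([1.0, 4.0], [0.8, 2.5, 3.9]))  # → [0.8, 3.9]
```

In practice the covariate would typically be a propensity score summarizing many background characteristics, and match quality would be checked via covariate balance between the two groups.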
Bell, Stephen H.; Peck, Laura R. – American Journal of Evaluation, 2013
To answer "what works?" questions about policy interventions based on an experimental design, Peck (2003) proposes to use baseline characteristics to symmetrically divide treatment and control group members into subgroups defined by endogenously determined postrandom assignment events. Symmetric prediction of these subgroups in both…
Descriptors: Program Effectiveness, Experimental Groups, Control Groups, Program Evaluation
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Labin, Susan N.; Duffy, Jennifer L.; Meyers, Duncan C.; Wandersman, Abraham; Lesesne, Catherine A. – American Journal of Evaluation, 2012
The continuously growing demand for program results has produced an increased need for evaluation capacity building (ECB). The "Integrative ECB Model" was developed to integrate concepts from existing ECB theory literature and to structure a synthesis of the empirical ECB literature. The study used a broad-based research synthesis method with…
Descriptors: Synthesis, Literature Reviews, Data Analysis, Coding
Coryn, Chris L. S.; Noakes, Lindsay A.; Westine, Carl D.; Schroter, Daniela C. – American Journal of Evaluation, 2011
Although the general conceptual basis appeared far earlier, theory-driven evaluation came to prominence only a few decades ago with the appearance of Chen's 1990 book "Theory-Driven Evaluations." Since that time, the approach has attracted many supporters as well as detractors. In this paper, 45 cases of theory-driven evaluations, published over a…
Descriptors: Evidence, Program Evaluation, Educational Practices, Literature Reviews
Reichardt, Charles S. – American Journal of Evaluation, 2011
I define a treatment effect in terms of a comparison of outcomes and provide a typology of all possible comparisons that can be used to estimate treatment effects, including comparisons that are relatively unknown in both the literature and practice. I then assess the relative merit, worth, and value of all possible comparisons based on the…
Descriptors: Program Effectiveness, Evaluation Methods, Evaluation Criteria, Comparative Analysis
Azzam, Tarek – American Journal of Evaluation, 2011
This study addresses the central question "How do evaluators' background characteristics relate to their evaluation design choices?" Evaluators were provided with a fictitious description of a school-based program and asked to design an evaluation of that program. Relevant background characteristics such as level of experience, methodological…
Descriptors: Evaluators, Program Evaluation, Evaluation Utilization, Evaluation Methods