Showing 1 to 15 of 679 results
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Peer reviewed
Dahler-Larsen, Peter – American Journal of Evaluation, 2018
As theory-based evaluation (TBE) engages in situations where multiple stakeholders help develop complex program theory about dynamic phenomena in politically contested settings, it becomes difficult to develop and use program theory without ambiguity. The purpose of this article is to explore ambiguity as a fruitful perspective that helps TBE face…
Descriptors: Ambiguity (Context), Concept Formation, Educational Theories, Organizational Theories
Peer reviewed
Finucane, Mariel McKenzie; Martinez, Ignacio; Cody, Scott – American Journal of Evaluation, 2018
In the coming years, public programs will capture even more and richer data than they do now, including data from web-based tools used by participants in employment services, from tablet-based educational curricula, and from electronic health records for Medicaid beneficiaries. Program evaluators seeking to take full advantage of these data…
Descriptors: Bayesian Statistics, Data Analysis, Program Evaluation, Randomized Controlled Trials
Peer reviewed
Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M. – American Journal of Evaluation, 2018
Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…
Descriptors: Randomized Controlled Trials, Program Evaluation, Program Effectiveness, Family Violence
Peer reviewed
Ledford, Jennifer R. – American Journal of Evaluation, 2018
Randomization of a large number of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the…
Descriptors: Research Design, Randomized Controlled Trials, Experimental Groups, Control Groups
Peer reviewed
Whitesell, Nancy Rumbaugh; Sarche, Michelle; Keane, Ellen; Mousseau, Alicia C.; Kaufman, Carol E. – American Journal of Evaluation, 2018
Evidence-based interventions hold promise for reducing gaps in health equity across diverse populations, but evidence about effectiveness within these populations lags behind the mainstream, often leaving opportunities to fulfill this promise unrealized. Mismatch between standard intervention outcomes research methods and the cultural and…
Descriptors: Scientific Methodology, Cultural Context, Health Promotion, Intervention
Peer reviewed
Zandniapour, Lily; Deterding, Nicole M. – American Journal of Evaluation, 2018
Tiered evidence initiatives are an important federal strategy to incentivize and accelerate the use of rigorous evidence in planning, implementing, and assessing social service investments. The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service, adopted a public-private partnership approach to tiered…
Descriptors: Program Effectiveness, Program Evaluation, Research Needs, Evidence
Peer reviewed
Grob, George F. – American Journal of Evaluation, 2018
This article is the fifth in a series on the relationship of evaluation theory and practice. It was commissioned by the Eastern Evaluation Research Society and the American Evaluation Association as part of The Chelimsky Forum, launched in 2013 to honor Eleanor Chelimsky, one of the most influential and respected program evaluators of our era, and…
Descriptors: Professional Development, Ethics, Evaluation Methods, Theory Practice Relationship
Peer reviewed
Mark, Melvin M. – American Journal of Evaluation, 2018
George Grob presented the fifth and final Eleanor Chelimsky Forum address at the 2017 annual meeting of the Eastern Evaluation Research Society. In this commentary, I respond to several points that George raises in the "American Journal of Evaluation" paper based on that address. An overarching theme of my comments involves the potential…
Descriptors: Theory Practice Relationship, Evaluation Research, Research Needs, Evaluation Methods
Peer reviewed
Moulton, Shawn R.; Peck, Laura R.; Greeney, Adam – American Journal of Evaluation, 2018
In experimental evaluations of health and social programs, the role of dosage is rarely explored because researchers cannot usually randomize individuals to experience varying dosage levels. Instead, such evaluations reveal the average effects of exposure to an intervention, although program exposure may vary widely. This article compares three…
Descriptors: Marriage, Intervention, Prediction, Program Effectiveness
Peer reviewed
Gates, Emily F. – American Journal of Evaluation, 2018
Evaluation is defined by its central task of valuing--the process and product of judging the merit, worth, or significance of a policy or program. However, there are no clear-cut ways to consider values and render value judgments in evaluation practice. There remains contention in the evaluation field about whether and how to make value judgments.…
Descriptors: Heuristics, Value Judgment, Values, Systems Approach
Peer reviewed
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon – American Journal of Evaluation, 2018
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Descriptors: Bayesian Statistics, Evaluation Methods, Statistical Analysis, Hypothesis Testing
Peer reviewed
Mark, Melvin M.; Caracelli, Valerie; McNall, Miles A.; Miller, Robin Lin – American Journal of Evaluation, 2018
Since 2003, the Oral History Project Team has conducted interviews with individuals who have made particularly noteworthy contributions to the theory and practice of evaluation. In 2013, Mel Mark, Valerie Caracelli, and Miles McNall sat with Thomas Cook in Washington, D.C., during the American Evaluation Association (AEA) annual conference. The…
Descriptors: Biographies, Oral History, College Faculty, Faculty Development
Peer reviewed
Franz, Berkeley A.; Skinner, Daniel; Murphy, John W. – American Journal of Evaluation, 2018
This article examines the theoretical basis of the community as it is evoked in health evaluation. In particular, we examine how hospitals carrying out Community Health Needs Assessments (CHNAs) define communities, as well as the implications of these definitions for how to study and engage community problems. We present qualitative findings from…
Descriptors: Hospitals, Public Health, Needs Assessment, Definitions
Peer reviewed
Arensman, Bodille; van Waegeningh, Cornelie; van Wessel, Margit – American Journal of Evaluation, 2018
Theory of change (ToC) is currently "the" approach for the evaluation and planning of international development programs. This approach is considered especially suitable for complex interventions. We question this assumption and argue that ToC's focus on cause-effect logic and intended outcomes does not do justice to the recursive nature…
Descriptors: Theories, Change, Program Evaluation, Advocacy