Showing 1 to 15 of 41 results
Peer reviewed
Pattyn, Valérie; Molenveld, Astrid; Befani, Barbara – American Journal of Evaluation, 2019
Qualitative comparative analysis (QCA) is gaining ground in evaluation circles, but the number of applications is still limited. In this article, we consider the challenges that can emerge during a QCA evaluation by drawing on our experience of conducting one in the field of development cooperation. For each stage of the evaluation process, we…
Descriptors: Qualitative Research, Comparative Analysis, Evaluation Methods, Program Evaluation
Peer reviewed
Finucane, Mariel McKenzie; Martinez, Ignacio; Cody, Scott – American Journal of Evaluation, 2018
In the coming years, public programs will capture even more and richer data than they do now, including data from web-based tools used by participants in employment services, from tablet-based educational curricula, and from electronic health records for Medicaid beneficiaries. Program evaluators seeking to take full advantage of these data…
Descriptors: Bayesian Statistics, Data Analysis, Program Evaluation, Randomized Controlled Trials
Peer reviewed
Jacobson, Miriam R.; Whyte, Cristina E.; Azzam, Tarek – American Journal of Evaluation, 2018
Evaluators can work with brief units of text-based data, such as open-ended survey responses, text messages, and social media postings. Online crowdsourcing is a promising method for quantifying large amounts of text-based data by engaging hundreds of people to categorize the data. To further develop and test this method, individuals were…
Descriptors: Mixed Methods Research, Evaluation Methods, Comparative Analysis, Feedback (Response)
Peer reviewed
Groth Andersson, Signe; Denvall, Verner – American Journal of Evaluation, 2017
In recent years, performance management (PM) has become a buzzword in public sector organizations. Well-functioning PM systems rely on valid performance data, but critics point out that conflicting rationales or logics among the professional staff who record information can undermine the quality of the data. Based on a case study of social service…
Descriptors: Performance, Social Services, Case Studies, Data Collection
Peer reviewed
Koleros, Andrew; Jupp, Dee; Kirwan, Sean; Pradhan, Meeta S.; Pradhan, Pushkar K.; Seddon, David; Tumbahangfe, Ansu – American Journal of Evaluation, 2016
This article presents discussion and recommendations on approaches to retrospectively evaluating development interventions in the long term through a systems lens. It is based on experiences from the implementation of an 18-month study to investigate the impact of development interventions on economic and social change over a 40-year period in the…
Descriptors: Foreign Countries, Case Studies, Systems Development, International Programs
Peer reviewed
Maxwell, Nan L.; Rotz, Dana; Garcia, Christina – American Journal of Evaluation, 2016
This study examines the perceptions of data-driven decision making (DDDM) activities and culture in organizations driven by a social mission. Analysis of survey information from multiple stakeholders in each of eight social enterprises highlights the wide divergence in views of DDDM. Within an organization, managerial and nonmanagerial staff…
Descriptors: Data, Decision Making, Organizational Climate, Organizational Culture
Peer reviewed
Granger, Robert C.; Maynard, Rebecca – American Journal of Evaluation, 2015
Despite bipartisan support in Washington, DC, which dates back to the mid-1990s, the "what works" approach has yet to gain broad support among policymakers and practitioners. One way to build such support is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more…
Descriptors: Outcome Measures, Program Evaluation, Evaluation Utilization, Policy
Peer reviewed
Klerman, Jacob Alex; Olsho, Lauren E. W.; Bartlett, Susan – American Journal of Evaluation, 2015
While regression discontinuity has usually been applied retrospectively to secondary data, it is even more attractive when applied prospectively. In a prospective design, data collection can be focused on cases near the discontinuity, thereby improving internal validity and substantially increasing precision. Furthermore, such prospective…
Descriptors: Regression (Statistics), Evaluation Methods, Evaluation Problems, Probability
Peer reviewed
Hawk, Mary – American Journal of Evaluation, 2015
Randomized controlled trials are the gold standard in research but may not fully explain or predict outcome variations in community-based interventions. Demonstrating efficacy of externally driven programs in well-controlled environments may not translate to community-based implementation where resources and priorities vary. A bottom-up evaluation…
Descriptors: African Americans, Females, Acquired Immunodeficiency Syndrome (AIDS), Risk Management
Peer reviewed
Hall, Jori N.; Freeman, Melissa – American Journal of Evaluation, 2014
Shadowing is a data collection method that involves following a person as they carry out the everyday activities relevant to a research study. This article explores the use of shadowing in a formative evaluation of a professional development school (PDS). Specifically, this article discusses how shadowing was used to understand the role of a…
Descriptors: Formative Evaluation, Capacity Building, Professional Development Schools, Data Collection
Peer reviewed
Brandon, Paul R.; Fukunaga, Landry L. – American Journal of Evaluation, 2014
Evaluators widely agree that stakeholder involvement is a central aspect of effective program evaluation. With the exception of articles on collaborative evaluation approaches, however, a systematic review of the breadth and depth of the literature on stakeholder involvement has not been published. In this study, we examine peer-reviewed empirical…
Descriptors: Stakeholders, Research, Data Collection, Observation
Peer reviewed
Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M. – American Journal of Evaluation, 2014
Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet these methodologies have typically been employed either concurrently (e.g., project monitoring) or post hoc to assess program activities.…
Descriptors: Evaluation Methods, Program Implementation, Failure, Program Effectiveness
Peer reviewed
Gee, Kevin A. – American Journal of Evaluation, 2014
The growth in the availability of longitudinal data--data collected over time on the same individuals--as part of program evaluations has opened up exciting possibilities for evaluators to ask more nuanced questions about how individuals' outcomes change over time. However, in order to leverage longitudinal data to glean these important insights,…
Descriptors: Longitudinal Studies, Data Analysis, Statistical Studies, Program Evaluation
Peer reviewed
Campbell, Rebecca; Greeson, Megan R.; Fehler-Cabral, Giannina – American Journal of Evaluation, 2014
This article describes the process by which we created a recruitment protocol for engaging adolescent sexual assault victims in a qualitative evaluation study. Working in collaboration with forensic nurses, rape victim advocates, adolescent rape survivors, and our institutional review board (IRB), we created a prospective recruitment method…
Descriptors: Sexual Abuse, Recruitment, Trauma, Adolescents
Peer reviewed
Wharton, Tracy; Alexander, Neil – American Journal of Evaluation, 2013
This article describes lessons learned about implementing evaluations in hospital settings. In order to overcome the methodological dilemmas inherent in this environment, we used a practical participatory evaluation (P-PE) strategy to engage as many stakeholders as possible in the process of evaluating a clinical demonstration project.…
Descriptors: Hospitals, Demonstration Programs, Program Evaluation, Evaluation Methods