ERIC Number: ED366661
Record Type: RIE
Publication Date: 1993-Oct
Reference Count: N/A
Using National Surveys To Improve the Efficiency and Effectiveness of Broad-Based Program Evaluations.
Clark, Sheldon B.; Boser, Judith A.
The suggestions offered in this paper are based on the experience of two researchers and address how evaluations undertaken in a competitive arena, in which true experimental designs are not viable, can be designed so that meaningful comparative data can be examined. Case studies of the Science and Engineering Research Semester and the Laboratory Graduate Research Participation programs at the Oak Ridge Institute for Science and Education illustrate how existing comparison groups can be used when establishing a control group for each cohort in an educational study is not economically feasible. Data from national surveys are used for comparative purposes. These surveys include: (1) the Survey of Earned Doctorates of the National Science Foundation (NSF); (2) the Survey of Doctorate Recipients, another NSF survey; (3) the NSF New Entrants Survey; (4) the Survey of Experienced Scientists and Engineers, also an NSF survey; and (5) the National Survey of College Graduates. Items from these surveys that are clearly relevant are selected for comparison. It is concluded that this approach is not a panacea and requires thorough understanding and careful evaluation of the national survey, but that it can be a useful and cost-effective alternative to traditional control-group designs. Ten figures illustrate survey use. (Contains nine references.) (SLD)
Publication Type: Reports - Evaluative; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Identifiers: American Statistical Association; National Science Foundation; National Survey of College Graduates (NSF); New Entrants Survey; Survey of Doctorate Recipients; Survey of Earned Doctorates; Survey of Experienced Scientists and Engineers
Note: Paper presented at the Annual Meeting of the Southern Association for Public Opinion Research (Raleigh, NC, October 7-8, 1993). Based on a presentation made at the Annual Meeting of the American Evaluation Association (Chicago, IL, October 31-November 2, 1991).