ERIC Number: ED268179
Record Type: RIE
Publication Date: 1985-Oct
Reference Count: 0
Evaluating Clinical Training: Measurement and Utilization Implications from Three National Studies.
Norcross, John C.; Stevenson, John F.
This paper draws on three national studies of professional psychology programs to identify measurement and utilization implications for the evaluation of clinical training. The focus is on the evaluation of the training program rather than the evaluation of students' performance and competence. The directors of 315 programs--74 psychology training clinics, 62 clinical psychology doctoral programs, and 179 predoctoral internships--responded to mailed questionnaires. The questionnaires were designed to: (1) assess the current use of informal and formal training evaluation procedures; (2) gauge the relative impact of these procedures in judging the quality of training; and (3) delineate the major obstacles to conducting this type of evaluation. Aggregate results indicated that evaluation practices heavily favor student-focused, impressionistically collected, and qualitatively oriented sources of evidence for judging the training enterprise. Program directors rated written evaluations, formal accreditation reports, trainees' evaluations, and numerical ratings of program effectiveness as the most useful measures in gauging the quality of training. Rigor, resistance, and resources consistently emerged as the most intractable obstacles to meaningful evaluation. Findings also indicated considerable convergence in training evaluation designs, measures, and obstacles across settings. These marked similarities suggest that site variation is quite limited. Several implications for training evaluation are presented. (Author/PN)
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Authoring Institution: N/A
Note: Paper presented at the Annual Meeting of the Evaluation Research Society (Toronto, Ontario, Canada, October 17-19, 1985).