Showing all 7 results
Peer reviewed
Stufflebeam, Daniel L. – Studies in Educational Evaluation, 1985
Evaluators face point-of-entry problems when they are asked to begin an evaluation at an inappropriate time or to perform an inappropriate study. The fundamental purpose of evaluation is to help improve services. Guidelines are presented for choosing when to conduct a context, input, process, or product evaluation. (GDC)
Descriptors: Elementary Secondary Education, Evaluation Methods, Evaluation Utilization, Evaluators
Peer reviewed
Scheerens, Jaap – Studies in Educational Evaluation, 1985
To better understand the influence of the organizational setting on evaluation, this conceptual framework was developed and tried out in a meta-evaluation of innovatory educational programs in Holland. Four components are explained--contingency factors, organization structure, policy-making, and evaluation research--and a checklist is…
Descriptors: Evaluation Criteria, Evaluation Utilization, Meta Analysis, Meta Evaluation
Peer reviewed
Schott, Franz; And Others – Studies in Educational Evaluation, 1984
The relationship between instruction and a test is defined as a parallel content-valid relation. This article describes the PLANA procedure, which approaches the problem of content validity by applying constructional rules for producing or judging the content validity relationship between the instructional objective and the items. (BW)
Descriptors: Criterion Referenced Tests, Educational Objectives, Instructional Design, Teacher Education
Peer reviewed
Shepard, Lorrie – Studies in Educational Evaluation, 1980
The success of state assessment programs depends on how well the results are reported to their various audiences. This paper presents 11 guidelines for improving reporting practices, including: plan ahead; develop different reports for different audiences; and field test report formats, language, and content. (BW)
Descriptors: Audiences, Audiovisual Aids, Educational Assessment, Field Tests
Peer reviewed
Plomp, Tjeerd; Van Der Meer, Adri – Studies in Educational Evaluation, 1980
The explicit statement of course objectives and a retrospective analysis of student test results are used in the evaluation of a mathematics course for engineering students. (BW)
Descriptors: College Mathematics, Course Evaluation, Course Objectives, Data Analysis
Peer reviewed
Newfield, John – Studies in Educational Evaluation, 1980
When the basic unit being studied is a grouping of individuals, both students and items can be sampled. When this method is applied to the use of log sheets to measure fidelity of program implementation, the normal limitations of self-reporting are reduced. (BW)
Descriptors: Item Sampling, Measurement Techniques, Program Implementation, Questionnaires
Peer reviewed
Smith, Denise M.; Smith, Nick L. – Studies in Educational Evaluation, 1981
Evaluation audiences can understand evaluation reports if the reports are written in common language, are organized clearly, and incorporate information that gives the audience a context for understanding the report. Suggestions for how this can be achieved are offered. (Author/RL)
Descriptors: Evaluation, Evaluation Criteria, Research Reports, Technical Writing


