Showing all 6 results
Peer reviewed: Subkoviak, Michael J. – Journal of Educational Measurement, 1988
Current methods for obtaining reliability indices for mastery tests can be laborious. This paper offers practitioners tables from which agreement and kappa coefficients can be read directly and provides criteria for acceptable values of those coefficients. (TJH)
Descriptors: Mastery Tests, Statistical Analysis, Test Reliability, Testing
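As an illustrative sketch only (not the paper's look-up tables), the agreement and kappa coefficients referred to above can be computed directly from paired mastery decisions on two test administrations. The data below is hypothetical:

```python
# Decision-consistency (agreement) coefficient and Cohen's kappa for a
# mastery test, from two administrations of the same examinees.
# The `pairs` data is illustrative, not taken from the paper.

def agreement_and_kappa(pairs):
    """pairs: list of (first, second) mastery decisions, each 0 or 1."""
    n = len(pairs)
    p0 = sum(a == b for a, b in pairs) / n    # observed proportion of agreement
    p_master1 = sum(a for a, _ in pairs) / n  # P(master) on first administration
    p_master2 = sum(b for _, b in pairs) / n  # P(master) on second administration
    # chance agreement: both classified master, or both nonmaster, by chance
    pe = p_master1 * p_master2 + (1 - p_master1) * (1 - p_master2)
    kappa = (p0 - pe) / (1 - pe)              # agreement corrected for chance
    return p0, kappa

pairs = [(1, 1), (1, 1), (0, 0), (1, 0), (0, 0), (1, 1), (0, 1), (1, 1)]
p0, kappa = agreement_and_kappa(pairs)
```

Kappa is smaller than the raw agreement because some consistent decisions would occur by chance alone, which is why the paper treats the two coefficients separately.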
Peer reviewed: Harnisch, Delwyn L. – Journal of Educational Measurement, 1983
The Student-Problem (S-P) methodology is described using an example of 24 students on a test of 44 items. Information based on the students' test score and the modified caution index is put to diagnostic use. A modification of the S-P methodology is applied to domain-referenced testing. (Author/CM)
Descriptors: Academic Achievement, Educational Practices, Item Analysis, Responses
Peer reviewed: Cardinet, Jean; And Others – Journal of Educational Measurement, 1981
Since fixed and random facets may exist in objects of study as well as in conditions of observation, various modifications of the generalizability theory estimation formulas are required for different types of measurement designs. Various design modifications are proposed to improve reliability by reducing error variance. (Author/BW)
Descriptors: Analysis of Variance, Reliability, Research Design, Statistical Analysis
Peer reviewed: Marascuilo, Leonard A.; Slaughter, Robert E. – Journal of Educational Measurement, 1981
Six statistical methods for identifying possible sources of bias in standardized test items are presented. The relationship between chi-squared methods and item-response theory methods is also discussed. (Author/BW)
Descriptors: Comparative Analysis, Latent Trait Theory, Mathematical Models, Standardized Tests
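As a minimal sketch of one chi-squared screen of the kind surveyed above, a single item can be tested for a group-by-correctness association in a 2x2 table. The counts here are hypothetical, and this is only one of the six methods the paper compares:

```python
# Pearson chi-squared statistic for a 2x2 table of item responses:
# rows = examinee group, columns = correct/incorrect on one item.
# Counts below are illustrative, not from the paper.

def chi_square_2x2(a, b, c, d):
    """a, b = group 1 correct/incorrect; c, d = group 2 correct/incorrect."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # expected counts if correctness is independent of group membership
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_square_2x2(80, 20, 60, 40)
# compare stat to the chi-squared critical value with 1 degree of freedom
```

A large statistic flags the item for further study; as the abstract notes, such flags are interpreted alongside item-response-theory methods rather than in isolation.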
Peer reviewed: Berk, Ronald A. – Journal of Educational Measurement, 1980
A dozen different approaches that yield 13 reliability indices for criterion-referenced tests were identified and grouped into three categories: threshold loss function, squared-error loss function, and domain score estimation. Indices were evaluated within each category. (Author/RL)
Descriptors: Classification, Criterion Referenced Tests, Cutting Scores, Evaluation Methods
Peer reviewed: Livingston, Samuel A.; Wingersky, Marilyn A. – Journal of Educational Measurement, 1979
Procedures are described for studying the reliability of decisions based on specific passing scores with tests made up of discrete items and designed to measure continuous rather than categorical traits. These procedures are based on the estimation of the joint distribution of true scores and observed scores. (CTM)
Descriptors: Cutting Scores, Decision Making, Efficiency, Error of Measurement


