50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.


Showing all 6 results
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1988
Current methods for obtaining reliability indices for mastery tests can be laborious. This paper offers practitioners tables from which agreement and kappa coefficients can be read directly and provides criteria for acceptable values of agreement and kappa coefficients. (TJH)
Descriptors: Mastery Tests, Statistical Analysis, Test Reliability, Testing
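The agreement and kappa coefficients the abstract refers to can be computed directly when two parallel administrations of a mastery test are available. A minimal sketch, assuming simple 0/1 master/nonmaster decisions (the function name and example data are illustrative, not from the article):

```python
# Hedged sketch: observed agreement (p0) and Cohen's kappa for
# master/nonmaster classifications from two test administrations.

def agreement_and_kappa(decisions_a, decisions_b):
    """decisions_a, decisions_b: lists of 0/1 (nonmaster/master)
    decisions from two parallel administrations."""
    n = len(decisions_a)
    # Proportion of examinees classified consistently both times
    p0 = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n
    # Chance agreement expected from the marginal mastery rates
    pa = sum(decisions_a) / n
    pb = sum(decisions_b) / n
    pc = pa * pb + (1 - pa) * (1 - pb)
    # Kappa: agreement corrected for chance
    kappa = (p0 - pc) / (1 - pc)
    return p0, kappa

p0, kappa = agreement_and_kappa(
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 0, 0, 1, 1, 1],
)
```

Subkoviak's tables let practitioners look such values up from a single administration instead of computing them from two.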
Peer reviewed
Harnisch, Delwyn L. – Journal of Educational Measurement, 1983
The Student-Problem (S-P) methodology is described using an example of 24 students on a test of 44 items. Information based on the students' test score and the modified caution index is put to diagnostic use. A modification of the S-P methodology is applied to domain-referenced testing. (Author/CM)
Descriptors: Academic Achievement, Educational Practices, Item Analysis, Responses
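The S-P methodology orders students by total score and items by difficulty, then flags response patterns that deviate from the expected (Guttman) pattern with a caution index. A minimal sketch of the classic (unmodified) caution index; Harnisch's modified index differs in its normalization, and the data here are illustrative:

```python
# Hedged sketch of the Student-Problem (S-P) idea: Sato's caution
# index per student, based on covariance with item totals.

def caution_indices(matrix):
    """matrix: list of per-student 0/1 response lists."""
    n_items = len(matrix[0])
    item_totals = [sum(row[j] for row in matrix) for j in range(n_items)]
    # Items ordered easiest-first, as on an S-P chart
    order = sorted(range(n_items), key=lambda j: -item_totals[j])

    def cov(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y))

    indices = []
    for row in matrix:
        score = sum(row)
        # Guttman (expected) pattern: correct on the easiest items only
        perfect = [0] * n_items
        for j in order[:score]:
            perfect[j] = 1
        denom = cov(perfect, item_totals)
        indices.append(1 - cov(row, item_totals) / denom if denom else 0.0)
    return indices
```

A student whose responses match the Guttman pattern gets an index near zero; large values flag aberrant patterns (e.g., missing easy items while answering hard ones) for diagnostic follow-up.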
Peer reviewed
Cardinet, Jean; And Others – Journal of Educational Measurement, 1981
Since fixed and random facets may exist in objects of study as well as in conditions of observation, various modifications of the generalizability theory estimation formulas are required for different types of measurement designs. Various design modifications are proposed to improve reliability by reducing error variance. (Author/BW)
Descriptors: Analysis of Variance, Reliability, Research Design, Statistical Analysis
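The simplest case generalizability theory covers is a fully crossed persons x items (p x i) design with both facets random, where variance components are estimated from ANOVA mean squares. A minimal sketch under that assumption (names and the tiny data set are illustrative; the article's design modifications go well beyond this baseline):

```python
# Hedged sketch: variance components and generalizability
# coefficient for a crossed p x i G-study with random facets.

def g_study(scores):
    """scores[p][i]: score of person p on item i."""
    n_p, n_i = len(scores), len(scores[0])
    grand = sum(sum(r) for r in scores) / (n_p * n_i)
    p_means = [sum(r) / n_i for r in scores]
    i_means = [sum(scores[p][i] for p in range(n_p)) / n_p
               for i in range(n_i)]

    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_tot = sum((scores[p][i] - grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ss_res = ss_tot - ss_p - ss_i

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    var_res = ms_res                       # residual (error) component
    var_p = (ms_p - ms_res) / n_i          # person variance component
    rel_error = var_res / n_i              # relative error for n_i items
    g_coef = var_p / (var_p + rel_error)   # generalizability coefficient
    return var_p, var_res, g_coef
```

Cardinet et al.'s point is that when some facets are fixed rather than random, the error-variance terms in formulas like `rel_error` must be modified for each measurement design.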
Peer reviewed
Marascuilo, Leonard A.; Slaughter, Robert E. – Journal of Educational Measurement, 1981
Six statistical methods for identifying possible sources of bias in standardized test items are presented. The relationship between chi-squared methods and item-response theory methods is also discussed. (Author/BW)
Descriptors: Comparative Analysis, Latent Trait Theory, Mathematical Models, Standardized Tests
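A typical chi-squared screen for item bias stratifies examinees by total score and compares the groups' proportions correct on the studied item within each stratum. A minimal sketch in that spirit (not the authors' exact procedures; the interface and data are illustrative):

```python
# Hedged sketch: a score-stratified chi-square statistic for
# screening one item for bias between two examinee groups.

def chisq_item_bias(strata):
    """strata: list of (correct_g1, n_g1, correct_g2, n_g2) tuples,
    one per total-score level. Assumes nondegenerate cell counts."""
    chi2 = 0.0
    for c1, n1, c2, n2 in strata:
        p = (c1 + c2) / (n1 + n2)  # pooled proportion correct
        for c, n in ((c1, n1), (c2, n2)):
            exp_c, exp_w = n * p, n * (1 - p)   # expected right/wrong
            chi2 += ((c - exp_c) ** 2 / exp_c
                     + ((n - c) - exp_w) ** 2 / exp_w)
    return chi2  # compare to a chi-square critical value
```

Conditioning on total score is what links such methods to item-response theory: examinees of comparable ability should succeed on an unbiased item at comparable rates regardless of group.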
Peer reviewed
Berk, Ronald A. – Journal of Educational Measurement, 1980
A dozen different approaches that yield 13 reliability indices for criterion-referenced tests were identified and grouped into three categories: threshold loss function, squared-error loss function, and domain score estimation. Indices were evaluated within each category. (Author/RL)
Descriptors: Classification, Criterion Referenced Tests, Cutting Scores, Evaluation Methods
Peer reviewed
Livingston, Samuel A.; Wingersky, Marilyn A. – Journal of Educational Measurement, 1979
Procedures are described for studying the reliability of decisions based on specific passing scores with tests made up of discrete items and designed to measure continuous rather than categorical traits. These procedures are based on the estimation of the joint distribution of true scores and observed scores. (CTM)
Descriptors: Cutting Scores, Decision Making, Efficiency, Error of Measurement
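The reliability of pass/fail decisions at a passing score can be illustrated with a simpler model than the joint true/observed-score distribution the article estimates: assume each examinee's true score is a proportion correct and observed scores are binomial. A minimal sketch under that assumption (a generic illustration, not the Livingston-Wingersky procedure):

```python
# Hedged sketch: decision consistency at a passing score under a
# binomial error model (observed score ~ Binomial(n_items, true score)).
from math import comb

def decision_consistency(true_scores, n_items, passing_score):
    """true_scores: examinees' true proportion-correct values."""
    total = 0.0
    for t in true_scores:
        # P(observed score >= passing score) for this examinee
        p_pass = sum(comb(n_items, k) * t**k * (1 - t)**(n_items - k)
                     for k in range(passing_score, n_items + 1))
        # Probability two independent administrations agree
        total += p_pass**2 + (1 - p_pass)**2
    return total / len(true_scores)
```

Consistency is lowest for examinees whose true scores sit near the cut, which is why the choice of passing score matters for decision reliability.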