| Descriptor | Count |
| --- | --- |
| Test Reliability | 4 |
| Latent Trait Theory | 3 |
| Multiple Choice Tests | 3 |
| Academic Ability | 2 |
| Models | 2 |
| Scoring Formulas | 2 |
| Statistical Analysis | 2 |
| Statistical Studies | 2 |
| Test Construction | 2 |
| Test Items | 2 |
| Source | Count |
| --- | --- |
| Journal of Educational… | 8 |
| Author | Count |
| --- | --- |
| Lord, Frederic M. | 8 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 2 |
| Reports - Research | 2 |
| Audience | Count |
| --- | --- |
| Researchers | 1 |
Showing all 8 results
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1974
When comparing two tests that measure the same trait, separate comparisons should be made at different levels of the trait. A simple, practical, approximate formula is given for doing this. The adequacy of the approximation is illustrated using data comparing seven nationally known sixth-grade reading tests. (Author/RC)
Descriptors: Ability Identification, Comparative Analysis, Reading Tests, Statistical Analysis
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1977
A variety of practical applications of item characteristic curve test theory are discussed. Among these applications are tailored testing, two stage testing, determining whether two tests measure the same latent trait, and measuring item bias towards minority or other groups. (Author/JKS)
Descriptors: Computer Programs, Latent Trait Theory, Mastery Tests, Measurement
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1977
Two approaches from the existing literature for determining the optimal number of choices for a test item are compared with two new approaches. (Author)
Descriptors: Forced Choice Technique, Latent Trait Theory, Multiple Choice Tests, Test Items
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1975
The assumption that examinees either know the answer to a test item or guess at random is usually implausible. A different assumption is outlined, under which formula scoring is found to be clearly superior to number-right scoring. (Author)
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1974
Descriptors: Statistical Analysis, Test Reliability, Transformations (Mathematics)
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1986
Advantages and disadvantages of joint maximum likelihood, marginal maximum likelihood, and Bayesian methods of parameter estimation in item response theory are discussed and compared. (Author)
Descriptors: Bayesian Statistics, Error Patterns, Estimation (Mathematics), Higher Education
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1984
Four methods are outlined for estimating or approximating, from a single test administration, the standard error of measurement of number-right test scores at specified ability levels or cutting scores. The methods are illustrated and compared on one set of real test data. (Author)
Descriptors: Academic Ability, Cutting Scores, Error of Measurement, Scoring Formulas
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1971
Modifications of administration and item arrangement of a conventional test can force a match between item difficulty levels and the ability level of the examinee. Although different examinees take different sets of items, the scoring method provides comparable scores for all. Furthermore, the test is self-scoring. These advantages are obtained…
Descriptors: Academic Ability, Difficulty Level, Measurement Techniques, Models