| Publication Date | Count |
| --- | --- |
| In 2015 | 6 |
| Descriptor | Count |
| --- | --- |
| Accuracy | 3 |
| Comparative Analysis | 3 |
| Computation | 3 |
| Item Response Theory | 3 |
| Statistical Analysis | 3 |
| Bayesian Statistics | 2 |
| Equated Scores | 2 |
| Markov Processes | 2 |
| Models | 2 |
| Monte Carlo Methods | 2 |
| Source | Count |
| --- | --- |
| Journal of Educational… | 6 |
| Author | Count |
| --- | --- |
| Albano, Anthony D. | 1 |
| Chang, Hua-Hua | 1 |
| Cher Wong, Cheow | 1 |
| Choi, Seung W. | 1 |
| Kim, Dong-In | 1 |
| Kim, Sooyeon | 1 |
| Li, Xiaomin | 1 |
| Meng, Xiang-Bin | 1 |
| Moses, Tim | 1 |
| Sinharay, Sandip | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 6 |
| Reports - Research | 6 |
Showing all 6 results
Cher Wong, Cheow – Journal of Educational Measurement, 2015
Building on previous work by Lord and Ogasawara for dichotomous items, this article proposes an approach to deriving the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Descriptors: Item Response Theory, Error of Measurement, True Scores, Equated Scores
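For context, the true score equating referenced above can be sketched generically for dichotomous 2PL items (the article itself treats polytomous items and derives standard errors analytically; the item parameters below are hypothetical, for illustration only):

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve: expected true score at ability theta
    under a 2PL model with discriminations a and difficulties b."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return p.sum()

def true_score_equate(x, a_old, b_old, a_new, b_new):
    """Map true score x on the old form to the new form's scale:
    invert the old form's TCC to find theta, then evaluate the
    new form's TCC at that theta."""
    theta = brentq(lambda t: tcc(t, a_old, b_old) - x, -8.0, 8.0)
    return tcc(theta, a_new, b_new)

# Hypothetical item parameters for two 5-item forms
a_old = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b_old = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
a_new = np.array([0.9, 1.3, 1.0, 1.4, 1.2])
b_new = np.array([-0.8, -0.3, 0.2, 0.6, 1.2])

print(true_score_equate(2.5, a_old, b_old, a_new, b_new))
```

The standard errors the article derives quantify the sampling variability of such equated scores when the item parameters are themselves estimates.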
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This study investigates the accuracy of item response theory (IRT) proficiency estimators under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
Albano, Anthony D. – Journal of Educational Measurement, 2015
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Descriptors: Equated Scores, Sample Size, Sampling, Statistical Inference
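Classical linear observed-score equating, which this article generalizes, maps scores by matching the means and standard deviations of the two forms; a minimal sketch with made-up summary statistics (not the article's new method):

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Linear observed-score equating: place score x from form X on the
    form-Y scale by matching the first two moments of the distributions."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Hypothetical summary statistics for two forms
print(linear_equate(30, mean_x=28.0, sd_x=6.0, mean_y=26.0, sd_y=5.0))
```

With small samples, every estimated quantity (each mean and standard deviation) adds error, which is why methods that assume rather than estimate some of these terms can equate more accurately.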
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
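For context, the DINA model named in the title is conjunctive: an examinee produces the "ideal" response to an item only by mastering every attribute the item's Q-matrix row requires, with slip and guess parameters absorbing the noise. A minimal sketch of its item response probability (parameter values are hypothetical):

```python
import numpy as np

def dina_prob(alpha, q, s, g):
    """P(correct) under the DINA model: eta = 1 iff the examinee's
    attribute profile alpha covers every attribute required by the
    item's Q-matrix row q; then P = 1 - s if eta == 1, else g."""
    eta = bool(np.all(alpha >= q))  # conjunctive ideal response
    return (1.0 - s) if eta else g

# Hypothetical: item requires attributes 1 and 3; examinee masters all three
q = np.array([1, 0, 1])
alpha = np.array([1, 1, 1])
print(dina_prob(alpha, q, s=0.1, g=0.2))
```

DIF assessment in this setting asks whether s and g differ across examinee groups with identical attribute profiles, which is why methods built on a continuous IRT latent trait do not carry over directly.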
Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua – Journal of Educational Measurement, 2015
The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…
Descriptors: Reaction Time, Test Items, Accuracy, Models