| Publication Date | Count |
| --- | --- |
| In 2015 | 6 |
| Since 2014 | 30 |
| Since 2011 (last 5 years) | 105 |
| Since 2006 (last 10 years) | 204 |
| Since 1996 (last 20 years) | 377 |
| Descriptor | Count |
| --- | --- |
| Test Items | 257 |
| Item Response Theory | 173 |
| Scores | 133 |
| Test Construction | 120 |
| Higher Education | 109 |
| Comparative Analysis | 108 |
| Models | 93 |
| Simulation | 93 |
| Mathematical Models | 92 |
| Equated Scores | 88 |
| Source | Count |
| --- | --- |
| Journal of Educational Measurement | 839 |
| Author | Count |
| --- | --- |
| Wainer, Howard | 16 |
| van der Linden, Wim J. | 15 |
| Dorans, Neil J. | 14 |
| Kolen, Michael J. | 13 |
| Sinharay, Sandip | 12 |
| Bridgeman, Brent | 11 |
| Linn, Robert L. | 11 |
| Clauser, Brian E. | 10 |
| Holland, Paul W. | 10 |
| Bennett, Randy Elliot | 9 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 7 |
| Higher Education | 7 |
| High Schools | 6 |
| Secondary Education | 6 |
| Middle Schools | 4 |
| Postsecondary Education | 4 |
| Grade 8 | 3 |
| Elementary Education | 2 |
| Grade 10 | 1 |
| Grade 4 | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 21 |
| Practitioners | 3 |
| Teachers | 1 |
Showing 1 to 15 of 839 results
Wong, Cheow Cher – Journal of Educational Measurement, 2015
Building on previous work by Lord and Ogasawara for dichotomous items, this article proposes an approach for deriving the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Descriptors: Item Response Theory, Error of Measurement, True Scores, Equated Scores
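For orientation, the classical Lord-style IRT true-score equating idea that this line of work builds on can be sketched as follows: invert form X's test characteristic curve at a score x, then evaluate form Y's curve at the resulting theta. This is a generic illustration under the 2PL model, not the article's polytomous derivation; the item parameters are invented.

```python
# Sketch of IRT true-score equating for dichotomous items (2PL):
# solve TCC_X(theta) = x by bisection, then return TCC_Y(theta).
# All (a, b) item parameters below are hypothetical.
import math

def p_2pl(theta, a, b):
    """2PL correct-response probability for one item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def true_score(theta, items):
    """Test characteristic curve: expected number-correct score."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def equate(x, form_x, form_y, lo=-6.0, hi=6.0):
    """Map true score x on form X to the form-Y true-score scale."""
    for _ in range(100):               # bisection on the monotone TCC
        mid = (lo + hi) / 2
        if true_score(mid, form_x) < x:
            lo = mid
        else:
            hi = mid
    return true_score((lo + hi) / 2, form_y)

form_x = [(1.2, -0.5), (0.8, 0.0), (1.0, 0.7)]  # hypothetical (a, b) pairs
form_y = [(1.1, -0.3), (0.9, 0.2), (1.0, 0.5)]
print(equate(1.5, form_x, form_y))
```

Equating a form to itself returns the input score, which is a quick sanity check on the inversion step; the asymptotic standard errors the article derives quantify the sampling error in such equated scores.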
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
Albano, Anthony D. – Journal of Educational Measurement, 2015
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Descriptors: Equated Scores, Sample Size, Sampling, Statistical Inference
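The classical linear observed-score equating that this article generalizes matches the means and standard deviations of the two forms. A minimal sketch, with made-up score data rather than anything from the study:

```python
# Classical linear observed-score equating: map a form-X score x to the
# form-Y scale via y = mu_Y + (sd_Y / sd_X) * (x - mu_X).
# This is the textbook baseline, not the flexible method the article proposes.
import statistics

def linear_equate(x, scores_x, scores_y):
    """Map a form-X score x onto the form-Y scale."""
    mu_x, mu_y = statistics.mean(scores_x), statistics.mean(scores_y)
    sd_x, sd_y = statistics.stdev(scores_x), statistics.stdev(scores_y)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

form_x = [10, 12, 14, 16, 18]  # hypothetical form-X scores
form_y = [11, 14, 17, 20, 23]  # hypothetical form-Y scores
print(linear_equate(14, form_x, form_y))  # mean of X maps to mean of Y: 17.0
```

With small samples, the means and standard deviations above are poorly estimated, which is exactly why methods that assume rather than estimate parts of this function can reduce equating error.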
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua – Journal of Educational Measurement, 2015
The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…
Descriptors: Reaction Time, Test Items, Accuracy, Models
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K. – Journal of Educational Measurement, 2014
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Descriptors: Item Response Theory, Measurement Techniques, Nonparametric Statistics, Models
Yao, Lihua – Journal of Educational Measurement, 2014
The intent of this research was to find an item selection procedure in the multidimensional computer adaptive testing (CAT) framework that yielded higher precision for both the domain and composite abilities, had a higher usage of the item pool, and controlled the exposure rate. Five multidimensional CAT item selection procedures (minimum angle;…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna – Journal of Educational Measurement, 2014
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This…
Descriptors: Test Bias, Models, Simulation, Error Patterns
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
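The fixed-effect DINA response function that this random-effect extension starts from is simple enough to state directly: an examinee who masters every attribute an item requires answers correctly with probability 1 − slip, and otherwise succeeds only by guessing. A sketch with invented numbers:

```python
# Standard DINA item response function: eta = 1 iff the examinee's attribute
# pattern covers every attribute the item's Q-matrix row requires;
# P(correct) = 1 - slip when eta = 1, else guess.
# Attribute patterns and parameters below are hypothetical.

def dina_prob(alpha, q, slip, guess):
    """P(correct) under the DINA model for one examinee-item pair."""
    eta = all(a >= r for a, r in zip(alpha, q))  # masters all required attributes?
    return (1 - slip) if eta else guess

alpha = [1, 1, 0]   # hypothetical mastery pattern over 3 attributes
q_item = [1, 1, 0]  # item requires attributes 1 and 2
print(dina_prob(alpha, q_item, slip=0.1, guess=0.2))  # master -> 0.9
```

The random-effect extension the abstract describes replaces the fixed slip and guess parameters with person-varying quantities; the deterministic gate `eta` is unchanged.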
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A. – Journal of Educational Measurement, 2014
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Descriptors: Measurement Techniques, Evaluation Methods, Item Response Theory, Equated Scores
Powers, Sonya; Kolen, Michael J. – Journal of Educational Measurement, 2014
Accurate equating results are essential when comparing examinee scores across exam forms. Previous research indicates that equating results may not be accurate when group differences are large. This study compared the equating results of frequency estimation, chained equipercentile, item response theory (IRT) true-score, and IRT observed-score…
Descriptors: Accuracy, Equated Scores, Differences, Groups
Clauser, Jerome C.; Margolis, Melissa J.; Clauser, Brian E. – Journal of Educational Measurement, 2014
Evidence of stable standard setting results over panels or occasions is an important part of the validity argument for an established cut score. Unfortunately, due to the high cost of convening multiple panels of content experts, standards often are based on the recommendation from a single panel of judges. This approach implicitly assumes that…
Descriptors: Standard Setting (Scoring), Generalizability Theory, Replication (Evaluation), Cutting Scores
Bolt, Daniel M.; Deng, Sien; Lee, Sora – Journal of Educational Measurement, 2014
Functional form misfit is frequently a concern in item response theory (IRT), although the practical implications of misfit are often difficult to evaluate. In this article, we illustrate how seemingly negligible amounts of functional form misfit, when systematic, can be associated with significant distortions of the score metric in vertical…
Descriptors: Item Response Theory, Scaling, Goodness of Fit, Models
Shu, Lianghua; Schwarz, Richard D. – Journal of Educational Measurement, 2014
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Descriptors: Item Response Theory, Reliability, Models, Computation
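Of the four coefficients named, Cronbach's α is the easiest to show concretely. This is only the classical sample formula, not the IRT-based derivation the article develops, and the item-score matrix is made up:

```python
# Cronbach's alpha from raw item scores:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
# Illustrative data only; sample (n-1) variances throughout.
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list of examinee scores per item."""
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]  # per-examinee totals
    item_var = sum(statistics.variance(s) for s in item_scores)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

items = [[1, 0, 1, 1, 0],
         [1, 0, 1, 1, 1],
         [1, 0, 0, 1, 0]]  # 3 items x 5 examinees, hypothetical 0/1 scores
print(round(cronbach_alpha(items), 3))  # -> 0.794
```

An IRT-derived reliability replaces these sample variances with model-implied quantities, which is the move the article works through in its computational example.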