| Publication Date | Count |
| --- | --- |
| In 2015 | 6 |
| Since 2014 | 30 |
| Since 2011 (last 5 years) | 105 |
| Since 2006 (last 10 years) | 204 |
| Since 1996 (last 20 years) | 377 |
| Descriptor | Count |
| --- | --- |
| Test Items | 266 |
| Test Construction | 176 |
| Item Response Theory | 173 |
| Test Reliability | 156 |
| Scores | 149 |
| Test Validity | 147 |
| Higher Education | 135 |
| Comparative Analysis | 132 |
| Statistical Analysis | 116 |
| Models | 113 |
| Author | Count |
| --- | --- |
| Linn, Robert L. | 16 |
| Wainer, Howard | 16 |
| van der Linden, Wim J. | 15 |
| Dorans, Neil J. | 14 |
| Kolen, Michael J. | 14 |
| Bridgeman, Brent | 12 |
| Hambleton, Ronald K. | 12 |
| Livingston, Samuel A. | 12 |
| Sinharay, Sandip | 12 |
| Clauser, Brian E. | 10 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 7 |
| Higher Education | 7 |
| High Schools | 6 |
| Secondary Education | 6 |
| Middle Schools | 4 |
| Postsecondary Education | 4 |
| Grade 8 | 3 |
| Elementary Education | 2 |
| Grade 10 | 1 |
| Grade 4 | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 21 |
| Practitioners | 4 |
| Teachers | 1 |
Showing 16 to 30 of 1,152 results
Häggström, Jenny; Wiberg, Marie – Journal of Educational Measurement, 2014
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Descriptors: Equated Scores, Data Analysis, Comparative Analysis, Simulation
Sinharay, Sandip – Journal of Educational Measurement, 2014
Brennan noted that users of test scores often want (indeed, demand) that subscores be reported, along with total test scores, for diagnostic purposes. Haberman suggested a method based on classical test theory (CTT) to determine if subscores have added value over the total score. One way to interpret the method is that a subscore has added value…
Descriptors: Scores, Test Theory, Classification, Cutting Scores
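The Haberman criterion the abstract refers to compares proportional reduction in mean squared error (PRMSE): a subscore has added value when its true score is predicted better by the observed subscore than by the observed total score. A minimal sketch of that comparison, under the textbook simplifying assumption of uncorrelated errors (Haberman's exact estimator, which accounts for the subscore being part of the total, differs in detail; the function name is ours):

```python
import math

def subscore_added_value(rel_sub, rel_total, corr_sub_total):
    """Simplified Haberman-style check.

    rel_sub        -- reliability of the subscore
    rel_total      -- reliability of the total score
    corr_sub_total -- observed correlation between subscore and total

    Returns (PRMSE from subscore, PRMSE from total, added-value flag).
    """
    # Predicting the true subscore from the observed subscore:
    # PRMSE equals the subscore's reliability.
    prmse_sub = rel_sub
    # Disattenuated correlation between the two true scores,
    # clipped at 1.0 to guard against sampling noise.
    corr_true = min(1.0, corr_sub_total / math.sqrt(rel_sub * rel_total))
    # Predicting the true subscore from the observed total score.
    prmse_total = corr_true ** 2 * rel_total
    return prmse_sub, prmse_total, prmse_sub > prmse_total

# Hypothetical values: a fairly reliable subscore that is only
# moderately correlated with the total score.
prmse_s, prmse_x, added = subscore_added_value(0.80, 0.92, 0.70)
```

With these inputs the subscore wins (0.80 vs. roughly 0.61), so it would be judged worth reporting; a subscore that correlates very highly with the total typically would not.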
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
Wang, Wen-Chung; Su, Chi-Ming; Qiu, Xue-Lan – Journal of Educational Measurement, 2014
Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning…
Descriptors: Item Response Theory, Interrater Reliability, Models, Correlation
Tendeiro, Jorge N.; Meijer, Rob R. – Journal of Educational Measurement, 2014
In recent guidelines for fair educational testing it is advised to check the validity of individual test scores through the use of person-fit statistics. For practitioners it is unclear on the basis of the existing literature which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…
Descriptors: Educational Assessment, Test Validity, Scores, Statistical Analysis
Zu, Jiyun; Puhan, Gautam – Journal of Educational Measurement, 2014
Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method: the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…
Descriptors: Item Response Theory, Equated Scores, Item Analysis, Item Sampling
Lathrop, Quinn N.; Cheng, Ying – Journal of Educational Measurement, 2014
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
Descriptors: Cutting Scores, Classification, Computation, Nonparametric Statistics
Guo, Hongwen; Puhan, Gautam – Journal of Educational Measurement, 2014
In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
Descriptors: Equated Scores, Correlation, Simulation, Testing
Andersson, Björn; von Davier, Alina A. – Journal of Educational Measurement, 2014
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
Descriptors: Internet, Information Transfer, Synchronous Communication, Error of Measurement
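Silverman's rule of thumb mentioned in the abstract has a standard closed form, h = 0.9 · min(s, IQR/1.34) · n^(−1/5). A sketch of the generic rule as used in density estimation (the function name is ours, and this is not the article's full kernel-equating procedure, only the bandwidth formula it builds on):

```python
import numpy as np

def silverman_bandwidth(scores):
    """Silverman's rule-of-thumb bandwidth for a univariate sample."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    sd = scores.std(ddof=1)                      # sample standard deviation
    q75, q25 = np.percentile(scores, [75, 25])
    robust_spread = min(sd, (q75 - q25) / 1.34)  # guard against heavy tails
    return 0.9 * robust_spread * n ** (-1 / 5)

# Example on a hypothetical score distribution of 100 examinees.
h = silverman_bandwidth(np.arange(100, dtype=float))
```

The appeal over the penalty method criticized in the abstract is that this is a closed-form expression: no minimization, hence no sensitivity to the optimizer's behavior.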
Naumann, Alexander; Hochweber, Jan; Hartig, Johannes – Journal of Educational Measurement, 2014
Students' performance in assessments is commonly attributed to more or less effective teaching. This implies that students' responses are significantly affected by instruction. However, the assumption that outcome measures indeed are instructionally sensitive is scarcely investigated empirically. In the present study, we propose a…
Descriptors: Test Bias, Longitudinal Studies, Hierarchical Linear Modeling, Test Items
Sinharay, Sandip; Wan, Ping; Whitaker, Mike; Kim, Dong-In; Zhang, Litong; Choi, Seung W. – Journal of Educational Measurement, 2014
With an increase in the number of online tests, interruptions during testing due to unexpected technical issues seem unavoidable. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. There is a lack of research on this…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Regression (Statistics)
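One generic regression-based way to gauge an interruption's impact (an illustration of the general idea, not the authors' method; all names are ours) is to fit a regression predicting a post-interruption score from a pre-interruption score using only uninterrupted examinees, then average the residuals of the interrupted group:

```python
import numpy as np

def interruption_effect(pre, post, interrupted):
    """Estimate the score impact of an interruption.

    Fits post ~ pre by least squares on the UNinterrupted examinees,
    then returns the mean residual of the interrupted examinees:
    a negative value suggests the interruption depressed scores.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    interrupted = np.asarray(interrupted, dtype=bool)
    slope, intercept = np.polyfit(pre[~interrupted], post[~interrupted], 1)
    residuals = post - (intercept + slope * pre)
    return residuals[interrupted].mean()

# Simulated illustration: 1,000 examinees, the first 300 interrupted,
# with the interruption lowering their post-section score by 4 points.
rng = np.random.default_rng(1)
pre = rng.normal(50, 10, 1000)
interrupted = np.arange(1000) < 300
post = 0.9 * pre + 5 + rng.normal(0, 3, 1000)
post[interrupted] -= 4.0
est = interruption_effect(pre, post, interrupted)
```

On this simulated data the estimate recovers a deficit of roughly 4 points for the interrupted group.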
van der Palm, Daniël W.; van der Ark, L. Andries; Sijtsma, Klaas – Journal of Educational Measurement, 2014
The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational…
Descriptors: Test Reliability, Scores, Computation, Simulation
Schroeders, Ulrich; Robitzsch, Alexander; Schipolowski, Stefan – Journal of Educational Measurement, 2014
C-tests are a specific variant of cloze tests that are considered time-efficient, valid indicators of general language proficiency. They are commonly analyzed with models of item response theory assuming local item independence. In this article we estimated local interdependencies for 12 C-tests and compared the changes in item difficulties,…
Descriptors: Comparative Analysis, Psychometrics, Cloze Procedure, Language Tests
Wang, Chun; Zheng, Chanjin; Chang, Hua-Hua – Journal of Educational Measurement, 2014
Computerized adaptive testing offers the possibility of gaining information on both the overall ability and cognitive profile in a single assessment administration. Some algorithms aiming for these dual purposes have been proposed, including the shadow test approach, the dual information method (DIM), and the constraint weighted method. The…
Descriptors: Item Response Theory, Adaptive Testing, Computer Assisted Testing, Cognitive Ability
Li, Zhushan – Journal of Educational Measurement, 2014
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Descriptors: Test Bias, Sample Size, Statistical Analysis, Regression (Statistics)
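The logistic-regression DIF framework the abstract builds on (in the style of Swaminathan and Rogers) fits three nested models per item — ability only, ability plus group, ability plus group plus their interaction — and reads uniform and nonuniform DIF off two likelihood-ratio tests. A self-contained numpy sketch (function names are ours; a production analysis would use an established package):

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Fit logistic regression by Newton-Raphson; return (coefs, log-likelihood)."""
    X = np.column_stack([np.ones(len(y)), X])  # prepend intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    p = np.clip(1 / (1 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    return beta, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def dif_lr_tests(theta, group, y):
    """Likelihood-ratio statistics for uniform and nonuniform DIF.

    Each statistic is asymptotically chi-square with 1 df under its null.
    """
    _, ll0 = fit_logistic(theta[:, None], y)                   # ability only
    _, ll1 = fit_logistic(np.column_stack([theta, group]), y)  # + group
    _, ll2 = fit_logistic(
        np.column_stack([theta, group, theta * group]), y)     # + interaction
    return 2 * (ll1 - ll0), 2 * (ll2 - ll1)

# Simulated item with uniform DIF only: the focal group's log-odds
# of a correct response are shifted down by 0.8.
rng = np.random.default_rng(0)
n = 2000
theta = rng.normal(size=n)
group = rng.integers(0, 2, n).astype(float)
logit = 1.2 * theta - 0.8 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
lr_uniform, lr_nonuniform = dif_lr_tests(theta, group, y)
```

With 2,000 simulated examinees the uniform-DIF statistic is large while the nonuniform statistic stays near its null distribution, matching how the power formulas in the article depend on effect size and sample size.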
