50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 106 to 120 of 1,152 results
Peer reviewed
Zu, Jiyun; Liu, Jinghua – Journal of Educational Measurement, 2010
Equating of tests composed of both discrete and passage-based multiple choice items using the nonequivalent groups with anchor test design is popular in practice. In this study, we compared the effect of discrete and passage-based anchor items on observed score equating via simulation. Results suggested that an anchor with a larger proportion of…
Descriptors: Equated Scores, Test Items, Multiple Choice Tests, Comparative Analysis
Peer reviewed
Seo, Minhee; Roussos, Louis A. – Journal of Educational Measurement, 2010
DIMTEST is a widely used and studied method for testing the hypothesis of test unidimensionality as represented by local item independence. However, DIMTEST does not report the amount of multidimensionality that exists in data when rejecting its null. To provide more information regarding the degree to which data depart from unidimensionality, a…
Descriptors: Effect Size, Statistical Bias, Computation, Test Length
Peer reviewed
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
Peer reviewed
Attali, Yigal – Journal of Educational Measurement, 2010
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Descriptors: Generalizability Theory, Statistical Analysis, Reaction Time, Timed Tests
Peer reviewed
Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B. – Journal of Educational Measurement, 2010
In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…
Descriptors: Test Length, Goodness of Fit, Item Response Theory, Simulation
Peer reviewed
Sinharay, Sandip; Holland, Paul W. – Journal of Educational Measurement, 2010
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three equating methods that can be used with a NEAT design are the frequency estimation equipercentile equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. We suggest an…
Descriptors: Equated Scores, Item Response Theory, Comparative Analysis, Evaluation
Peer reviewed
Kim, Sooyeon; Livingston, Samuel A. – Journal of Educational Measurement, 2010
Score equating based on small samples of examinees is often inaccurate for the examinee populations. We conducted a series of resampling studies to investigate the accuracy of five methods of equating in a common-item design. The methods were chained equipercentile equating of smoothed distributions, chained linear equating, chained mean equating,…
Descriptors: Equated Scores, Test Items, Item Sampling, Item Response Theory
Peer reviewed
Rijmen, Frank – Journal of Educational Measurement, 2010
Testlet effects can be taken into account by incorporating specific dimensions in addition to the general dimension into the item response theory model. Three such multidimensional models are described: the bi-factor model, the testlet model, and a second-order model. It is shown how the second-order model is formally equivalent to the testlet…
Descriptors: Computation, Item Response Theory, Models, Maximum Likelihood Statistics
Peer reviewed
French, Brian F.; Finch, W. Holmes – Journal of Educational Measurement, 2010
The purpose of this study was to examine the performance of differential item functioning (DIF) assessment in the presence of a multilevel structure that often underlies data from large-scale testing programs. Analyses were conducted using logistic regression (LR), a popular, flexible, and effective tool for DIF detection. Data were simulated…
Descriptors: Test Bias, Testing Programs, Evaluation, Measurement
Peer reviewed
Yao, Lihua – Journal of Educational Measurement, 2010
In educational assessment, overall scores obtained by simply averaging a number of domain scores are sometimes reported. However, simply averaging the domain scores ignores the fact that different domains have different score points, that scores from those domains are related, and that at different score points the relationship between overall…
Descriptors: Educational Assessment, Error of Measurement, Item Response Theory, Scores
Peer reviewed
Robusto, Egidio; Stefanutti, Luca; Anselmi, Pasquale – Journal of Educational Measurement, 2010
Within the theoretical framework of knowledge space theory, a probabilistic skill multimap model for assessing learning processes is proposed. The learning process of a student is modeled as a function of the student's knowledge and of an educational intervention on the attainment of specific skills required to solve problems in a knowledge…
Descriptors: Intervention, Learning Processes, Probability, Item Response Theory
Peer reviewed
Puhan, Gautam – Journal of Educational Measurement, 2010
In this study I compared results of chained linear, Tucker, and Levine observed-score equating under conditions where the new and old form samples were similar in ability and also when they were different in ability. The length of the anchor test was also varied to examine its effect on the three different equating methods. The three equating…
Descriptors: Testing, Equated Scores, Comparative Analysis, Causal Models
Peer reviewed
de la Torre, Jimmy; Lee, Young-Sun – Journal of Educational Measurement, 2010
Cognitive diagnosis models (CDMs), as alternative approaches to unidimensional item response models, have received increasing attention in recent years. CDMs are developed for the purpose of identifying the mastery or nonmastery of multiple fine-grained attributes or skills required for solving problems in a domain. For CDMs to receive wider use,…
Descriptors: Ability Grouping, Item Response Theory, Models, Problem Solving
Peer reviewed
Lee, Won-Chan – Journal of Educational Measurement, 2010
In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…
Descriptors: Classification, Item Response Theory, Comparative Analysis, Models
Peer reviewed
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias