50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 76 to 90 of 1,152 results
Peer reviewed
Direct link
Bridgeman, Brent – Journal of Educational Measurement, 2012
In an article in the Winter 2011 issue of the "Journal of Educational Measurement", van der Linden, Jeon, and Ferrara suggested that "test takers should trust their initial instincts and retain their initial responses when they have the opportunity to review test items." They presented a complex IRT model that appeared to show that students would…
Descriptors: Item Response Theory, Test Wiseness, Multiple Choice Tests, Scores
Peer reviewed
Direct link
Oh, Hyeonjoo; Moses, Tim – Journal of Educational Measurement, 2012
This study investigated differences between two approaches to chained equipercentile (CE) equating (one- and bi-direction CE equating) in nearly equal groups and relatively unequal groups. In one-direction CE equating, the new form is linked to the anchor in one sample of examinees and the anchor is linked to the reference form in the other…
Descriptors: Equated Scores, Statistical Analysis, Comparative Analysis, Differences
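A minimal sketch of the chained-equating mechanic this abstract describes, assuming plain empirical percentile ranks (no presmoothing or continuization) and made-up binomial score data; in one-direction CE equating, the new form is linked to the anchor in one sample and the anchor to the reference form in the other:

import numpy as np

def percentile_rank(scores, x):
    """Empirical percentile rank (0-100) of x within a score sample."""
    scores = np.asarray(scores)
    return 100.0 * ((scores < x).mean() + 0.5 * (scores == x).mean())

def equipercentile_link(from_scores, to_scores, x):
    """Map x to the score with the same percentile rank in the other distribution."""
    return np.percentile(to_scores, percentile_rank(from_scores, x))

def chained_equate(x, new_form, anchor_s1, anchor_s2, ref_form):
    """Chain: new form -> anchor (sample 1), then anchor -> reference (sample 2)."""
    v = equipercentile_link(new_form, anchor_s1, x)     # X -> V in sample 1
    return equipercentile_link(anchor_s2, ref_form, v)  # V -> Y in sample 2

rng = np.random.default_rng(0)
new_form  = rng.binomial(40, 0.55, 2000)   # sample 1: new form X
anchor_s1 = rng.binomial(20, 0.55, 2000)   # sample 1: anchor V
anchor_s2 = rng.binomial(20, 0.60, 2000)   # sample 2: anchor V
ref_form  = rng.binomial(40, 0.60, 2000)   # sample 2: reference form Y
print(chained_equate(25, new_form, anchor_s1, anchor_s2, ref_form))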
Peer reviewed
Direct link
Li, Feiming; Cohen, Allan; Shen, Linjun – Journal of Educational Measurement, 2012
Computer-based tests (CBTs) often use random ordering of items in order to minimize item exposure and reduce the potential for answer copying. Little research has been done, however, to examine item position effects for these tests. In this study, different versions of a Rasch model and different response time models were examined and applied to…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Models
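One way to picture the position effect under study is a data-generating sketch in which effective Rasch difficulty drifts with an item's randomized position; the drift parameter gamma and all values below are illustrative assumptions, not the authors' model:

import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 30
theta = rng.normal(0, 1, n_persons)          # person abilities
b = rng.normal(0, 1, n_items)                # base item difficulties
gamma = 0.02                                 # assumed difficulty increase per position

responses = np.empty((n_persons, n_items), dtype=int)
positions = np.empty((n_persons, n_items), dtype=int)
for p in range(n_persons):
    order = rng.permutation(n_items)         # random item ordering per examinee
    pos_of_item = np.argsort(order)          # position at which each item appears
    eff_b = b + gamma * pos_of_item          # position-shifted difficulty
    prob = 1.0 / (1.0 + np.exp(-(theta[p] - eff_b)))
    responses[p] = rng.binomial(1, prob)
    positions[p] = pos_of_item

# Items encountered in the second half of the test look slightly harder:
late = positions >= n_items // 2
print(responses[late].mean(), responses[~late].mean())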
Peer reviewed
Direct link
Shang, Yi – Journal of Educational Measurement, 2012
Growth models are used extensively in the context of educational accountability to evaluate student-, class-, and school-level growth. However, when error-prone test scores are used as independent variables or right-hand-side controls, the estimation of such growth models can be substantially biased. This article introduces a…
Descriptors: Error of Measurement, Statistical Analysis, Regression (Statistics), Simulation
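The bias the abstract refers to is the classical attenuation effect; a small simulation (assumptions mine, not the article's growth model) shows the regression slope shrinking as the prior score's reliability drops:

import numpy as np

rng = np.random.default_rng(2)
n = 50_000
true_prior = rng.normal(0, 1, n)                 # true score at time 1
growth = 0.5 * true_prior + rng.normal(0, 1, n)  # true slope = 0.5

for reliability in (1.0, 0.8, 0.6):
    err_var = (1 - reliability) / reliability    # error variance yielding this reliability
    observed = true_prior + rng.normal(0, np.sqrt(err_var), n)
    slope = np.cov(observed, growth)[0, 1] / np.var(observed)
    print(f"reliability={reliability:.1f}  estimated slope={slope:.3f}")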
Peer reviewed
Direct link
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items to answer from a given set. This practice creates a challenge for standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
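A toy simulation of the underlying problem (the setup is mine, not the authors' new model class): if more capable examinees identify the easier items more reliably, the difficulties of the chosen items become confounded with ability:

import numpy as np

rng = np.random.default_rng(11)
n, n_pool, n_choose = 2000, 10, 4
theta = rng.normal(0, 1, n)                  # abilities
b = rng.normal(0, 1, n_pool)                 # pool item difficulties

# Perceived difficulty is noisier for less able examinees (an assumption).
noise_sd = np.clip(1.0 - 0.4 * theta, 0.2, None)
perceived = b[None, :] + rng.normal(0, 1, (n, n_pool)) * noise_sd[:, None]
chosen = np.argsort(perceived, axis=1)[:, :n_choose]   # pick the 4 "easiest-looking"

mean_chosen_b = b[chosen].mean(axis=1)
print(np.corrcoef(theta, mean_chosen_b)[0, 1])         # negative: the able choose easier items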
Peer reviewed
Direct link
Moses, Tim – Journal of Educational Measurement, 2012
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Descriptors: Error of Measurement, Prediction, Regression (Statistics), True Scores
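A hedged numerical illustration of the general phenomenon, not the article's exact decomposition: under classical test theory, adding measurement error to the predictor and the criterion inflates the observed-score regression's prediction error variance:

import numpy as np

rng = np.random.default_rng(3)
n = 100_000
t_x = rng.normal(0, 1, n)                 # predictor true score
t_y = 0.7 * t_x + rng.normal(0, 0.5, n)   # criterion true score

for ex_sd, ey_sd in [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5)]:
    x = t_x + rng.normal(0, ex_sd, n)     # observed predictor
    y = t_y + rng.normal(0, ey_sd, n)     # observed criterion
    beta = np.cov(x, y)[0, 1] / np.var(x)
    resid = y - y.mean() - beta * (x - x.mean())   # OLS residuals
    print(f"error SDs ({ex_sd}, {ey_sd}): prediction error variance = {np.var(resid):.3f}")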
Peer reviewed
Direct link
Raymond, Mark R.; Swygert, Kimberly A.; Kahraman, Nilufer – Journal of Educational Measurement, 2012
Although a few studies report sizable score gains for examinees who repeat performance-based assessments, research has not yet addressed the reliability and validity of inferences based on ratings of repeat examinees on such tests. This study analyzed scores for 8,457 single-take examinees and 4,030 repeat examinees who completed a 6-hour clinical…
Descriptors: Physicians, Licensing Examinations (Professions), Performance Based Assessment, Repetition
Peer reviewed
Direct link
Kane, Michael – Journal of Educational Measurement, 2011
Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that, we need some way of accounting for the variability in observed scores over these conditions of…
Descriptors: Error of Measurement, Scores, Test Interpretation, Testing
Peer reviewed
Direct link
Baldwin, Peter – Journal of Educational Measurement, 2011
Growing interest in fully Bayesian item response models raises the question: To what extent can model parameter posterior draws enhance existing practices? One practice that has traditionally relied on model parameter point estimates but may be improved by using posterior draws is the development of a common metric for two independently calibrated…
Descriptors: Item Response Theory, Bayesian Statistics, Computation, Sampling
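One concrete version of the practice the abstract points to, sketched with mean/sigma common-item linking and fabricated posterior draws: computing the linking constants once per draw yields a posterior distribution for the transformation rather than a single point estimate:

import numpy as np

rng = np.random.default_rng(4)
n_draws, n_common = 2000, 15
true_b = rng.normal(0, 1, n_common)

# Fake posterior draws for the common items on two independently calibrated
# forms; form 2 sits on a shifted, stretched metric (b' = 1.2*b + 0.5).
draws_form1 = true_b + rng.normal(0, 0.1, (n_draws, n_common))
draws_form2 = 1.2 * true_b + 0.5 + rng.normal(0, 0.1, (n_draws, n_common))

A = draws_form2.std(axis=1) / draws_form1.std(axis=1)        # slope, per draw
B = draws_form2.mean(axis=1) - A * draws_form1.mean(axis=1)  # intercept, per draw
print(f"A: {A.mean():.2f} +/- {A.std():.2f}   B: {B.mean():.2f} +/- {B.std():.2f}")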
Peer reviewed
Direct link
Petscher, Yaacov; Schatschneider, Christopher – Journal of Educational Measurement, 2011
Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…
Descriptors: Scores, Sample Size, Pretests Posttests, Correlation
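A small Monte Carlo in the same spirit (design and parameter values are illustrative, and the residual-based t test is only an approximation to full ANCOVA) showing the power advantage Huck and McLean identified:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_per_group, effect, rho, reps = 50, 0.4, 0.6, 2000
hits_diff = hits_ancova = 0

for _ in range(reps):
    pre = rng.normal(0, 1, 2 * n_per_group)
    group = np.repeat([0, 1], n_per_group)
    post = rho * pre + effect * group + rng.normal(0, np.sqrt(1 - rho**2), 2 * n_per_group)

    # Simple difference score: t test on post - pre
    d = post - pre
    _, p = stats.ttest_ind(d[group == 1], d[group == 0])
    hits_diff += p < 0.05

    # Covariance-adjusted score: regress post on pre, t test on residuals
    beta = np.cov(pre, post)[0, 1] / np.var(pre)
    r = post - beta * pre
    _, p = stats.ttest_ind(r[group == 1], r[group == 0])
    hits_ancova += p < 0.05

print(f"power, difference score:     {hits_diff / reps:.2f}")
print(f"power, covariance-adjusted:  {hits_ancova / reps:.2f}")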
Peer reviewed
Direct link
van der Linden, Wim J. – Journal of Educational Measurement, 2011
A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
Descriptors: Test Format, Reaction Time, Test Construction
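The lognormal response-time model referenced here specifies log T_ij ~ Normal(beta_i - tau_j, 1/alpha_i^2), so a form's expected total time follows in closed form from the item parameters, which is what makes linear assembly constraints on those parameters practical. A short sketch with assumed parameter values:

import numpy as np

rng = np.random.default_rng(6)
n_items = 40
beta = rng.normal(4.0, 0.3, n_items)    # time intensities (log-seconds scale)
alpha = rng.uniform(1.5, 2.5, n_items)  # discriminations

def expected_total_time(tau):
    """E[sum_i T_i] for a test taker with speed tau (lognormal mean per item)."""
    return np.sum(np.exp(beta - tau + 0.5 / alpha**2))

for tau in (-0.5, 0.0, 0.5):
    print(f"speed tau={tau:+.1f}: expected total time = {expected_total_time(tau) / 60:.1f} min")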
Peer reviewed
Direct link
Sinharay, Sandip; Haberman, Shelby J.; Lee, Yi-Hsuan – Journal of Educational Measurement, 2011
Providing information to test takers and test score users about the abilities of test takers at different score levels has been a persistent problem in educational and psychological measurement. Scale anchoring, a technique which describes what students at different points on a score scale know and can do, is a tool to provide such information.…
Descriptors: Scores, Test Items, Statistical Analysis, Licensing Examinations (Professions)
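A rough sketch of the scale-anchoring idea (the 0.65/0.50 criteria and all data are illustrative, not the authors'): flag items that examinees near an anchor point answer correctly at a high rate while examinees near the next lower anchor point do not:

import numpy as np

rng = np.random.default_rng(10)
n, n_items = 5000, 30
theta = rng.normal(0, 1, n)
b = np.linspace(-2, 2, n_items)
resp = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - b[None, :]))))
total = resp.sum(axis=1)

anchors = [10, 15, 20, 25]
prev_p = np.zeros(n_items)
for a in anchors:
    near = np.abs(total - a) <= 1          # examinees scoring near this anchor
    p = resp[near].mean(axis=0)            # per-item proportion correct
    anchored = np.where((p >= 0.65) & (prev_p < 0.50))[0]
    print(f"anchor {a}: items {anchored}")
    prev_p = p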
Peer reviewed
Direct link
Zhu, Xiaowen; Stone, Clement A. – Journal of Educational Measurement, 2011
The posterior predictive model checking method is a flexible Bayesian model-checking tool and has recently been used to assess fit of dichotomous IRT models. This paper extended previous research to polytomous IRT models. A simulation study was conducted to explore the performance of posterior predictive model checking in evaluating different…
Descriptors: Item Response Theory, Bayesian Statistics, Models, Goodness of Fit
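The PPMC mechanic itself is model-agnostic; a minimal sketch for a dichotomous Rasch model (the paper's extension is to polytomous models), with noisy stand-ins where real MCMC output would go:

import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items, n_draws = 500, 20, 200

# Observed data from a Rasch model (stand-in for real responses).
theta0, b0 = rng.normal(0, 1, n_persons), rng.normal(0, 1, n_items)
p0 = 1 / (1 + np.exp(-(theta0[:, None] - b0[None, :])))
observed = rng.binomial(1, p0)

def discrepancy(data):
    """Example statistic: SD of the item proportion-correct values."""
    return data.mean(axis=0).std()

# Stand-in "posterior draws": true parameters plus noise (a real analysis
# would use MCMC output here).
ppp = 0
obs_stat = discrepancy(observed)
for _ in range(n_draws):
    theta = theta0 + rng.normal(0, 0.2, n_persons)
    b = b0 + rng.normal(0, 0.2, n_items)
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    replicated = rng.binomial(1, p)                 # replicated data set
    ppp += discrepancy(replicated) >= obs_stat
print(f"posterior predictive p-value: {ppp / n_draws:.2f}")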
Peer reviewed
Direct link
Puhan, Gautam – Journal of Educational Measurement, 2011
The impact of log-linear presmoothing on the accuracy of small sample chained equipercentile equating was evaluated under two conditions. In the first condition the small samples differed randomly in ability from the target population. In the second condition the small samples were systematically different from the target population. Results…
Descriptors: Equated Scores, Data Analysis, Sample Size, Accuracy
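A minimal sketch of log-linear presmoothing, assuming a degree-3 polynomial Poisson model (which preserves the first three moments of the score distribution) fit with statsmodels; the smoothed frequencies would then replace the raw ones in the equipercentile step:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
max_score = 40
raw = rng.binomial(max_score, 0.6, 100)                  # a small examinee sample
scores = np.arange(max_score + 1)
freq = np.bincount(raw, minlength=max_score + 1)         # raw score frequencies

degree = 3                                               # degree-3 fit preserves 3 moments
z = (scores - scores.mean()) / scores.std()              # standardize for stability
X = np.column_stack([z**k for k in range(degree + 1)])   # 1, z, z^2, z^3
fit = sm.GLM(freq, X, family=sm.families.Poisson()).fit()
smoothed = fit.fittedvalues                              # presmoothed frequencies

print(np.round(smoothed, 2))                             # smooth curve, no empty cells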
Peer reviewed
Direct link
DeCarlo, Lawrence T.; Kim, YoungKoung; Johnson, Matthew S. – Journal of Educational Measurement, 2011
The hierarchical rater model (HRM) recognizes the hierarchical structure of data that arises when raters score constructed response items. In this approach, raters' scores are not viewed as being direct indicators of examinee proficiency but rather as indicators of essay quality; the (latent categorical) quality of an examinee's essay in turn…
Descriptors: Responses, Essay Tests, Models, Scores
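An illustrative two-level data generator in the spirit of the HRM (all parameter values are assumptions, not the article's): latent essay quality depends on proficiency, and each rater's score is a noisy, possibly biased reading of that quality:

import numpy as np

rng = np.random.default_rng(9)
n_examinees, n_raters, n_categories = 300, 4, 5

theta = rng.normal(0, 1, n_examinees)                 # examinee proficiency
cut = np.array([-1.5, -0.5, 0.5, 1.5])                # thresholds on proficiency
quality = np.digitize(theta + rng.normal(0, 0.5, n_examinees), cut)  # latent category 0..4

rater_sd = rng.uniform(0.4, 0.9, n_raters)            # rater unreliability
rater_bias = rng.normal(0, 0.2, n_raters)             # rater severity/leniency

# Rater r perceives quality + bias + noise, then rounds to the nearest category.
perceived = (quality[:, None] + rater_bias[None, :]
             + rng.normal(0, rater_sd, (n_examinees, n_raters)))
ratings = np.clip(np.round(perceived), 0, n_categories - 1).astype(int)

print(ratings[:5])   # rows: essays, columns: raters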