Showing 1 to 15 of 106 results
Peer reviewed
Direct link
Bridgeman, Brent – Journal of Educational Measurement, 2012
In an article in the Winter 2011 issue of the "Journal of Educational Measurement", van der Linden, Jeon, and Ferrara suggested that "test takers should trust their initial instincts and retain their initial responses when they have the opportunity to review test items." They presented a complex IRT model that appeared to show that students would…
Descriptors: Item Response Theory, Test Wiseness, Multiple Choice Tests, Scores
Peer reviewed
Direct link
Suh, Youngsuk; Bolt, Daniel M. – Journal of Educational Measurement, 2011
In multiple-choice items, differential item functioning (DIF) in the correct response may or may not be caused by differentially functioning distractors. Identifying distractors as causes of DIF can provide valuable information for potential item revision or the design of new test items. In this paper, we examine a two-step approach based on…
Descriptors: Test Items, Test Bias, Multiple Choice Tests, Simulation
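The two-step distractor analysis described above builds on a nominal response model; as a simpler point of reference, the sketch below computes the Mantel-Haenszel common odds ratio, a standard first-pass DIF screen. It is not the authors' procedure, and all variable names and data are invented for illustration.

```python
import numpy as np

def mantel_haenszel_odds_ratio(correct, group, strata):
    """Mantel-Haenszel common odds ratio for a first-pass DIF screen.

    correct: 0/1 scored item responses
    group:   0 = reference group, 1 = focal group
    strata:  matching variable, e.g., total-score level
    Values far from 1.0 suggest the item favors one group.
    """
    correct, group, strata = map(np.asarray, (correct, group, strata))
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, right
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, wrong
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, right
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, wrong
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den
```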
Peer reviewed
Direct link
Zu, Jiyun; Liu, Jinghua – Journal of Educational Measurement, 2010
Equating of tests composed of both discrete and passage-based multiple choice items using the nonequivalent groups with anchor test design is popular in practice. In this study, we compared the effect of discrete and passage-based anchor items on observed score equating via simulation. Results suggested that an anchor with a larger proportion of…
Descriptors: Equated Scores, Test Items, Multiple Choice Tests, Comparative Analysis
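As background for the anchor-test design compared above, here is a minimal chained linear equating sketch: form X is linked to the anchor V in one group, the anchor to form Y in the other group, and the two linear functions are composed. The moments below are invented for illustration.

```python
import numpy as np

def linear_link(mean_from, sd_from, mean_to, sd_to):
    """Linear linking function that matches means and standard deviations."""
    def link(score):
        z = (np.asarray(score, dtype=float) - mean_from) / sd_from
        return mean_to + sd_to * z
    return link

# Hypothetical moments: (form X, anchor V) observed in group 1,
# (anchor V, form Y) observed in group 2.
x_to_v = linear_link(mean_from=30.0, sd_from=6.0, mean_to=15.0, sd_to=3.2)
v_to_y = linear_link(mean_from=14.5, sd_from=3.0, mean_to=32.0, sd_to=6.5)

# Chained equating: carry an X score through the anchor scale to Y.
print(round(float(v_to_y(x_to_v(35.0))), 2))
```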
Peer reviewed
Direct link
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Peer reviewed
Direct link
Yao, Lihua; Boughton, Keith – Journal of Educational Measurement, 2009
Numerous assessments contain a mixture of multiple choice (MC) and constructed response (CR) item types and many have been found to measure more than one trait. Thus, there is a need for multidimensional dichotomous and polytomous item response theory (IRT) modeling solutions, including multidimensional linking software. For example,…
Descriptors: Multiple Choice Tests, Responses, Test Items, Item Response Theory
Peer reviewed
Direct link
Kim, Seonghoon; Lee, Won-Chan – Journal of Educational Measurement, 2006
Under item response theory (IRT), linking proficiency scales from separate calibrations of multiple forms of a test to achieve a common scale is required in many applications. Four IRT linking methods, including the mean/mean, mean/sigma, Haebara, and Stocking-Lord methods, have been presented for use with single-format tests. This study extends the…
Descriptors: Simulation, Item Response Theory, Test Format, Measures (Individuals)
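Of the four linking methods named above, mean/sigma is the easiest to show concretely: the slope and intercept of the scale transformation come from the means and standard deviations of the anchor items' difficulty estimates in the two calibrations. A minimal sketch, with invented difficulties:

```python
import numpy as np

def mean_sigma_linking(b_source, b_target):
    """Mean/sigma scale linking for IRT calibrations.

    Given difficulty estimates for the same anchor items from two
    separate calibrations, find A and B such that
    theta_target = A * theta_source + B.
    """
    b_source = np.asarray(b_source, dtype=float)
    b_target = np.asarray(b_target, dtype=float)
    A = b_target.std(ddof=1) / b_source.std(ddof=1)
    B = b_target.mean() - A * b_source.mean()
    return A, B

# Hypothetical anchor-item difficulties from two calibration runs.
b_old = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_new = [-1.0, -0.2, 0.3, 1.1, 1.9]
A, B = mean_sigma_linking(b_old, b_new)
print(f"A = {A:.3f}, B = {B:.3f}")
```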
Peer reviewed
Direct link
Kim, Jee-Seon – Journal of Educational Measurement, 2006
Simulation and real data studies are used to investigate the value of modeling multiple-choice distractors on item response theory linking. Using the characteristic curve linking procedure for Bock's (1972) nominal response model presented by Kim and Hanson (2002), all-category linking (i.e., a linking based on all category characteristic curves…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Simulation
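Bock's (1972) nominal response model, the basis of the characteristic curve linking above, assigns each response category k a probability proportional to exp(a_k * theta + c_k). A minimal sketch of the category probabilities, with invented parameter values:

```python
import numpy as np

def nominal_response_probs(theta, a, c):
    """Category probabilities under Bock's (1972) nominal response model:
    P_k(theta) = exp(a_k * theta + c_k) / sum_j exp(a_j * theta + c_j)."""
    z = np.asarray(a, dtype=float) * theta + np.asarray(c, dtype=float)
    z -= z.max()               # subtract max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# Hypothetical slopes and intercepts for a 4-option item (key = option 0).
a = [1.2, -0.3, -0.4, -0.5]
c = [0.5, 0.2, -0.1, -0.6]
print(nominal_response_probs(theta=1.0, a=a, c=c).round(3))
```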
Peer reviewed
Direct link
Gorin, Joanna S. – Journal of Educational Measurement, 2005
Based on a previously validated cognitive processing model of reading comprehension, this study experimentally examines potential generative components of text-based multiple-choice reading comprehension test questions. Previous research (Embretson & Wetzel, 1987; Gorin & Embretson, 2005; Sheehan & Ginther, 2001) shows text encoding and decision…
Descriptors: Reaction Time, Reading Comprehension, Difficulty Level, Test Items
Peer reviewed
Direct link
Yang, Wen-Ling – Journal of Educational Measurement, 2004
This application study investigates whether the multiple-choice to composite linking functions that determine Advanced Placement Program exam grades remain invariant over subgroups defined by region. Three years of test data from an AP exam are used to study invariance across regions. The study focuses on two questions: (a) How invariant are grade…
Descriptors: Advanced Placement, Advanced Placement Programs, Multiple Choice Tests, Scores
Peer reviewed
Direct link
Dorans, Neil J. – Journal of Educational Measurement, 2004
Score equity assessment (SEA) is introduced, and placed within a fair assessment context that includes differential prediction or fair selection and differential item functioning. The notion of subpopulation invariance of linking functions is central to the assessment of score equity, just as it has been for differential item functioning and…
Descriptors: Prediction, Scores, Calculus, Advanced Placement
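One common way to quantify the subpopulation invariance that SEA examines is a root mean square difference (RMSD) between subgroup linking functions and the total-group function, weighted by subgroup size. The sketch below uses invented conversions and is only illustrative of that idea:

```python
import numpy as np

def rmsd_invariance(total_conversion, subgroup_conversions, weights):
    """Weighted RMSD between subgroup score conversions and the
    total-group conversion; larger values mean less invariance."""
    total = np.asarray(total_conversion, dtype=float)
    subs = np.asarray(subgroup_conversions, dtype=float)  # groups x scores
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    sq = (subs - total) ** 2                      # squared gaps per group
    return np.sqrt(np.tensordot(w, sq, axes=1))   # RMSD per score point

# Hypothetical converted scores at three raw-score points,
# for the total group and two subgroups.
total = [20.0, 30.0, 40.0]
subs = [[19.5, 30.4, 40.2],
        [20.4, 29.7, 39.9]]
print(rmsd_invariance(total, subs, weights=[0.6, 0.4]).round(3))
```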
Peer reviewed
Direct link
van der Linden, Wim J.; Sotaridona, Leonardo – Journal of Educational Measurement, 2004
A statistical test for the detection of answer copying on multiple-choice tests is presented. The test is based on the idea that the answers of examinees to test items may be the result of three possible processes: (1) knowing, (2) guessing, and (3) copying, but that examinees who do not have access to the answers of other examinees can arrive at…
Descriptors: Multiple Choice Tests, Test Items, Hypothesis Testing, Statistical Distributions
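The entry above develops a formal statistical test; the sketch below is a much cruder illustration of the same idea: count items on which two examinees chose the same incorrect option, and compare that count with a simple binomial null. The answer strings and chance match rate are invented, and this is not the authors' statistic.

```python
import numpy as np
from scipy.stats import binom

def matching_wrong_answers(copier, source, key, p_match=0.3):
    """Count same-wrong-answer matches between two examinees and return a
    one-sided binomial p-value under a chance match rate of p_match."""
    copier, source, key = (np.asarray(list(x)) for x in (copier, source, key))
    both_wrong = (copier != key) & (source != key)
    n = int(both_wrong.sum())
    matches = int(np.sum(both_wrong & (copier == source)))
    p_value = binom.sf(matches - 1, n, p_match)  # P(X >= matches)
    return matches, n, p_value

# Hypothetical 10-item answer strings: suspected copier, source, and key.
print(matching_wrong_answers("ABCAADCDCB", "ABCAABCDCB", "ABCDABCDAB"))
```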
Peer reviewed
Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information with the three-parameter and graded response models. Results for more than 3,000 adult examinees for 2 tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
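Item information is the quantity the entry above compares across item formats; for a three-parameter logistic (3PL) item it has a closed form. A minimal sketch, with invented item parameters:

```python
import numpy as np

def info_3pl(theta, a, b, c, D=1.7):
    """Fisher information of a three-parameter logistic (3PL) item:
    I(theta) = (D*a)^2 * (Q/P) * ((P - c) / (1 - c))^2, with Q = 1 - P."""
    p = c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))
    return (D * a) ** 2 * (1 - p) / p * ((p - c) / (1 - c)) ** 2

# Hypothetical item: moderate discrimination, easy, guessable.
thetas = np.linspace(-3, 3, 7)
print(info_3pl(thetas, a=1.0, b=-0.5, c=0.2).round(3))
```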
Peer reviewed
Sotaridona, Leonardo S.; Meijer, Rob R. – Journal of Educational Measurement, 2003
Proposed two new indices to detect answer copying on a multiple-choice test and conducted a simulation study to investigate the usefulness of both indices. Discusses conditions under which the proposed indices can be useful. (SLD)
Descriptors: Cheating, Multiple Choice Tests, Simulation, Testing Problems
Peer reviewed
Bielinski, John; Davison, Mark L. – Journal of Educational Measurement, 2001
Used mathematics achievement data from the 1992 National Assessment of Educational Progress, the Third International Mathematics and Science Study, and the National Education Longitudinal Study of 1988 to examine the sex difference by item difficulty interaction. The predicted negative correlation was found for all eight populations and was…
Descriptors: Correlation, Difficulty Level, Interaction, Mathematics Tests
Peer reviewed
Katz, Irvin R.; Bennett, Randy Elliot; Berger, Aliza E. – Journal of Educational Measurement, 2000
Studied the solution strategies of 55 high school students who solved parallel constructed response and multiple-choice items that differed only in the presence of response options. Differences in difficulty between response formats did not correspond to differences in strategy choice. Interprets results in light of the relative comprehension…
Descriptors: College Entrance Examinations, Constructed Response, Difficulty Level, High School Students