Showing all 11 results
Peer reviewed
Direct link
Leroux, Audrey J.; Dodd, Barbara G. – Journal of Experimental Education, 2016
The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Error of Measurement
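The no-exposure-control (maximum information) and randomesque procedures named in this entry can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: item_info is an assumed helper returning Fisher information under the fitted model, and the candidate-group size k is a hypothetical setting (k = 1 reduces to pure maximum-information selection).

    import random

    def select_item(theta, available_items, item_info, k=1):
        """Pick the next CAT item from `available_items`.

        item_info(item, theta) -> Fisher information of `item` at ability theta
        k = 1 : pure maximum-information selection (no exposure control)
        k > 1 : randomesque selection -- draw at random from the k most
                informative items, which spreads exposure across the pool.
        """
        # Rank the remaining items by information at the current ability estimate.
        ranked = sorted(available_items, key=lambda item: item_info(item, theta),
                        reverse=True)
        return random.choice(ranked[:k])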
Peer reviewed
Direct link
Lee, HwaYoung; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
This study investigated item exposure control procedures under various combinations of item pool characteristics and ability distributions in computerized adaptive testing based on the partial credit model. Three variables were manipulated: item pool characteristics (120 items for each of easy, medium, and hard item pools), two ability…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Ability
Peer reviewed
Direct link
Ho, Tsung-Han; Dodd, Barbara G. – Applied Measurement in Education, 2012
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Cook, Karon F.; Dodd, Barbara G.; Fitzpatrick, Steven J. – Journal of Outcome Measurement, 1999
The partial-credit model, the generalized partial-credit model, and the graded-response model were compared in the context of testlet scoring using Scholastic Assessment Tests results (n=2,548) and a simulated data set. Results favor the partial-credit model in this context; considerations for model selection in other contexts are discussed. (SLD)
Descriptors: College Entrance Examinations, Comparative Analysis, High School Students, Item Response Theory
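For reference, the generalized partial credit model compared in this and several neighboring entries gives the probability that an examinee with ability \theta earns score x on item j with categories 0, ..., m_j as

    P_{jx}(\theta) = \frac{\exp\left[\sum_{k=0}^{x} a_j(\theta - \delta_{jk})\right]}{\sum_{r=0}^{m_j} \exp\left[\sum_{k=0}^{r} a_j(\theta - \delta_{jk})\right]}, \qquad x = 0, 1, \ldots, m_j,

with the convention a_j(\theta - \delta_{j0}) \equiv 0, slope a_j, and step difficulties \delta_{jk}. The partial credit model is the special case a_j = 1 for every item; the graded response model instead models cumulative (graded) category probabilities with a two-parameter logistic form.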
Peer reviewed
Choi, Seung W.; Cook, Karon F.; Dodd, Barbara G. – Journal of Outcome Measurement, 1997
The accuracy with which estimation procedures can recover item and person parameters is of interest to those who use item response theory, as many educational researchers do. This study investigated parameter recovery for the partial credit model using the MULTILOG computer program. Ways to improve the accuracy of estimation are suggested. (SLD)
Descriptors: Educational Research, Estimation (Mathematics), Item Response Theory
Peer reviewed
Dodd, Barbara G.; And Others – Educational and Psychological Measurement, 1993
Effects of the following variables on performance of computerized adaptive testing (CAT) procedures for the partial credit model (PCM) were studied: (1) stopping rule for terminating CAT; (2) item pool size; and (3) distribution of item difficulties. Implications of findings for CAT systems based on the PCM are discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Difficulty Level
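Variable-length stopping rules of the kind studied here usually terminate the CAT once the standard error of the ability estimate falls below a cutoff or a maximum test length is reached. The loop below is a minimal sketch under that assumption; the scoring, selection, and SE routines are passed in as assumed callables, and the cutoff of 0.30 and maximum of 30 items are illustrative values only, not those used in the study.

    def run_variable_length_cat(item_pool, get_response, select_item,
                                estimate_theta, standard_error,
                                se_cutoff=0.30, max_items=30):
        """Administer a variable-length CAT under a standard-error stopping rule.

        get_response(item)                      -> examinee's observed item score
        select_item(theta, available)           -> next item (e.g., max information)
        estimate_theta(items, responses)        -> current ability estimate
        standard_error(items, responses, theta) -> SE of that estimate
        """
        administered, responses = [], []
        theta, se = 0.0, float("inf")           # provisional starting values
        available = set(item_pool)
        while available and len(administered) < max_items and se > se_cutoff:
            item = select_item(theta, available)
            available.remove(item)
            administered.append(item)
            responses.append(get_response(item))
            theta = estimate_theta(administered, responses)
            se = standard_error(administered, responses, theta)
        return theta, se, administered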
Peer reviewed
Chen, Ssu-Kuang; Hou, Liling; Dodd, Barbara G. – Educational and Psychological Measurement, 1998
A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the partial credit model and compare it with maximum likelihood estimation (MLE). Results show the conditions under which EAP and MLE provide relatively accurate estimation in CAT. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
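Expected a posteriori (EAP) estimation of the kind compared with MLE in this study is a posterior mean computed by numerical quadrature over a prior on ability. The sketch below assumes Masters' partial credit model and a standard-normal prior, both illustrative choices rather than the study's code.

    import numpy as np

    def pcm_probs(theta, step_difficulties):
        """Category probabilities for one item under Masters' partial credit model."""
        # z_x = sum_{k<=x} (theta - delta_k); the score-0 term is defined as zero
        z = np.concatenate(([0.0], np.cumsum(theta - np.asarray(step_difficulties))))
        ez = np.exp(z - z.max())                # softmax, numerically stabilised
        return ez / ez.sum()

    def eap_estimate(step_difficulty_lists, responses, n_points=61):
        """EAP ability estimate and posterior SD after a set of scored responses.

        step_difficulty_lists : one list of step difficulties per administered item
        responses             : observed category score (0, 1, ...) for each item
        A standard-normal prior is assumed, which is an illustrative choice.
        """
        nodes = np.linspace(-4.0, 4.0, n_points)    # quadrature grid over theta
        prior = np.exp(-0.5 * nodes ** 2)           # N(0, 1) density up to a constant
        like = np.ones_like(nodes)
        for deltas, score in zip(step_difficulty_lists, responses):
            like *= np.array([pcm_probs(t, deltas)[score] for t in nodes])
        posterior = prior * like
        posterior /= posterior.sum()
        theta_eap = float(np.sum(nodes * posterior))                  # posterior mean
        post_sd = float(np.sqrt(np.sum((nodes - theta_eap) ** 2 * posterior)))
        return theta_eap, post_sd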
Peer reviewed
Pastor, Dena A.; Dodd, Barbara G.; Chang, Hua-Hua – Applied Psychological Measurement, 2002
Studied the impact of using five different exposure control algorithms in two sizes of item pool calibrated using the generalized partial credit model. Simulation results show that the a-stratified design, in comparison to a no-exposure control condition, could be used to reduce item exposure and overlap and increase pool use, while degrading…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Item Banks
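The a-stratified design referred to in this entry partitions the pool into strata by ascending slope (a) parameter and forces early items to come from low-a strata, holding highly discriminating items in reserve for later in the test. A minimal sketch, assuming each item object exposes hypothetical a (slope) and b (location) attributes:

    def build_strata(item_pool, n_strata):
        """Partition the pool into strata of roughly equal size by ascending slope."""
        ordered = sorted(item_pool, key=lambda item: item.a)
        size = -(-len(ordered) // n_strata)             # ceiling division
        return [ordered[i:i + size] for i in range(0, len(ordered), size)]

    def a_stratified_select(theta, strata, stage, items_per_stratum, used):
        """Pick the next item from the stratum assigned to the current test stage.

        Low-a strata are used early and high-a strata late.  Within the stratum,
        the unused item whose location is closest to the current ability
        estimate is chosen (a simple b-matching rule).
        """
        idx = min(stage // items_per_stratum, len(strata) - 1)
        candidates = [it for it in strata[idx] if it not in used]
        if not candidates:                              # stratum exhausted: fall back
            candidates = [it for s in strata for it in s if it not in used]
        return min(candidates, key=lambda item: abs(item.b - theta))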
Peer reviewed
Davis, Laurie Laughlin; Pastor, Dena A.; Dodd, Barbara G.; Chiang, Claire; Fitzpatrick, Steven J. – Journal of Applied Measurement, 2003
Examined the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system based on the partial credit model. Simulation results show the Sympson-Hetter technique can be used with minimal impact on measurement precision,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
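The Sympson-Hetter technique evaluated in this entry controls exposure with a probability experiment at selection time: the best remaining candidate item i is administered only with probability K_i, where the K_i are exposure parameters calibrated beforehand through iterative simulation so that no item's exposure rate exceeds a target. A minimal sketch, assuming those parameters are already available:

    import random

    def sympson_hetter_select(ranked_items, k_params):
        """Administer the best candidate that survives its exposure experiment.

        ranked_items : candidate items ordered from most to least informative
        k_params     : dict mapping item -> pre-calibrated exposure parameter K_i
        Items failing the Bernoulli(K_i) experiment are skipped for this
        examinee; the last candidate is administered unconditionally so that
        an item is always returned.
        """
        for item in ranked_items[:-1]:
            if random.random() < k_params.get(item, 1.0):
                return item
        return ranked_items[-1]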
Boyd, Aimee M.; Dodd, Barbara G.; Fitzpatrick, Steven J. – 2003
This study compared several item exposure control procedures for computerized adaptive test (CAT) systems based on a three-parameter logistic testlet response theory model (X. Wang, E. Bradlow, and H. Wainer, 2002) and G. Masters' (1982) partial credit model using real data from the Verbal Reasoning section of the Medical College Admission Test.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items
Peer reviewed
Direct link
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
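Model selection criteria of the kind examined in this study are generally functions of the maximized log-likelihood, the number of estimated parameters, and the sample size; the abstract is truncated, so the specific six criteria are not reproduced here. AIC and BIC, shown below as generic examples, illustrate the form such criteria take:

    import math

    def aic(log_likelihood, n_params):
        """Akaike information criterion: smaller values indicate a preferred model."""
        return 2 * n_params - 2 * log_likelihood

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian (Schwarz) information criterion: penalizes parameters more
        heavily than AIC as the number of examinees grows."""
        return n_params * math.log(n_obs) - 2 * log_likelihood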