ERIC Number: ED348384
Record Type: RIE
Publication Date: 1992-Apr
Pages: 32
Abstractor: N/A
Reference Count: N/A
ISBN: N/A
ISSN: N/A
Assessing the Reliability of Computer Adaptive Testing Branching Algorithms Using HyperCAT.
Shermis, Mark D.; And Others
The reliability of four branching algorithms commonly used in computer adaptive testing (CAT) was examined. These algorithms were: (1) maximum likelihood (MLE); (2) Bayesian; (3) modal Bayesian; and (4) crossover. Sixty-eight undergraduate college students were randomly assigned to one of the four conditions using the HyperCard-based CAT program, HyperCAT. To control for order effects, half of the students were randomly assigned to take the paper-and-pencil test first, followed 3 weeks later by the CAT, while the other half took the CAT first. Investigative analyses showed no initial group differences by algorithm for the paper-and-pencil test or for CAT-estimated ability. In addition, there was no order effect. The internal consistency coefficient for the paper-and-pencil test was 0.73. The marginal reliability for the CAT was 0.97. Correlations between the paper-and-pencil scores and theta estimates of ability ranged from 0.48 to 0.79. Reliability was highest for the MLE algorithm, followed by the Bayesian, modal Bayesian, and crossover algorithms. Given the constraints of MLE branching algorithms (e.g., the examinee must get at least one item correct and one item incorrect), and the alleged bias associated with Bayesian branching strategies, the results suggest that modal Bayesian branching may provide an acceptable alternative. Six tables present study data. Three figures and 14 references are included. (Author/SLD)
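The MLE constraint noted in the abstract — that the examinee must answer at least one item correctly and one incorrectly — can be illustrated with a minimal sketch. The following is not the HyperCAT implementation; it assumes a standard two-parameter logistic (2PL) item response model and estimates ability (theta) by grid search over the log-likelihood. The item parameters are illustrative values, not study data. With a mixed response pattern the likelihood has an interior maximum; with an all-correct pattern it increases without bound, so the "estimate" simply runs to the edge of the search range.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: P(correct | theta) for an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a 0/1 response pattern at ability theta."""
    ll = 0.0
    for (a, b), u in zip(items, responses):
        p = p_correct(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll

def mle_theta(items, responses, lo=-4.0, hi=4.0, steps=801):
    """Grid-search MLE of theta on [lo, hi]. A finite maximum exists only
    when the pattern mixes correct and incorrect responses."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda t: log_likelihood(t, items, responses))

# Illustrative (a, b) parameters for three items.
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]

theta_mixed = mle_theta(items, [1, 1, 0])        # interior maximum
theta_all_correct = mle_theta(items, [1, 1, 1])  # no finite maximum:
                                                 # estimate pinned at hi
```

This divergence is why operational MLE-based CATs defer ability estimation (or fall back to another rule) until the response pattern is mixed, and it is one motivation for the Bayesian alternatives compared in the study, whose priors keep the estimate finite for any pattern.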
Publication Type: Reports - Research; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers: Ability Estimates; Branching Algorithms; HyperCAT Computer Program; Paper and Pencil Tests
Note: Paper presented at the Annual Meeting of the National Council on Measurement in Education (San Francisco, CA, April 20-24, 1992).