ERIC Number: ED317619
Record Type: Non-Journal
Publication Date: 1990-Apr
Pages: 15
Abstractor: N/A
Reference Count: N/A
Test-Retest Consistency of Computer Adaptive Tests.
Lunz, Mary E.; And Others
This study explores the test-retest consistency of computer adaptive tests of varying lengths. The testing model used was designed as a mastery model to determine whether an examinee's estimated ability level is above or below a pre-established criterion expressed in the metric (logits) of the calibrated item pool scale. The Rasch model was used to calibrate items and estimate person measures. The calibrated item pool contained 726 items. The PROX version of the maximum likelihood method of item selection was used in the adaptive algorithm. The content coverage of the computer adaptive tests was designed to be comparable to the test specifications for the conventional paper-and-pencil certification examination. A total of 765 students from across the nation participated as examinees in this study; a random sample of 162 of these students was placed in the test-retest condition. Examinees took two contiguous tests with the same test specifications but different items (alternative forms of varying lengths). The ability measures from the test and retest were found to correlate at 0.95 when corrected for attenuation due to measurement error, demonstrating that differentiation among examinee measures is comparable regardless of the length of the test or the particular subset of items. This finding provides evidence of the test-retest consistency of computer adaptive tests. Two data tables are included. (TJH)
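The abstract describes a Rasch-based adaptive mastery test: items are calibrated on a logit scale, each item is chosen adaptively relative to the current ability estimate, and the final estimate is compared to a fixed criterion. As a rough illustration only (not the authors' algorithm or code; the pool, criterion, and update routine below are invented for the sketch, and a simple Newton-Raphson ML update stands in for the PROX/ML procedure named in the abstract), the core loop might look like:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for an examinee of
    ability theta on an item of difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def select_item(theta, pool, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate -- the maximum-information choice under the Rasch model."""
    return min((i for i in range(len(pool)) if i not in used),
               key=lambda i: abs(pool[i] - theta))

def update_theta(theta, responses, difficulties, steps=10):
    """Newton-Raphson maximum-likelihood ability update from the responses
    so far (1 = correct, 0 = incorrect). Assumes a mixed response string,
    since the ML estimate is unbounded for all-correct or all-wrong records."""
    for _ in range(steps):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, probs))
        hess = -sum(p * (1.0 - p) for p in probs)
        if abs(hess) < 1e-9:
            break
        theta -= grad / hess
    return theta

def mastery_decision(theta, criterion):
    """Mastery model: pass if the estimated ability meets or exceeds the
    pre-established criterion on the calibrated logit scale."""
    return theta >= criterion
```

A test administered this way adapts its difficulty to the examinee, which is why two alternative forms built from different subsets of the same calibrated pool can still yield comparable measures, as the study's 0.95 corrected correlation indicates.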
Publication Type: Reports - Research; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A