ERIC Number: ED397061
Record Type: RIE
Publication Date: 1994-Apr
Reference Count: N/A
The Practical Impact of IRT Models and Parameters When Converting a Test to Adaptive Format.
Bizot, Elizabeth B.; Goldman, Steven H.
A study was conducted to evaluate the effects of the choice of item response theory (IRT) model, parameter calibration group, starting ability estimate, and stopping criterion on the conversion of an 80-item vocabulary test to computer adaptive format. Three parameter calibration groups were used: (1) a group of 1,000 high school seniors, (2) a group of 1,000 high school freshmen, and (3) 300 of the second group retested as seniors. Two methods for setting the initial ability estimate, a random estimate and an ability-based estimate, were explored using two-parameter logistic, three-parameter logistic with the "c" parameter fixed at 0.2 (the "2.5-parameter" model), and full three-parameter logistic models. Alternatives were tested against a database of 2,697 people (including the calibration group) who had taken the full 80-item test. Results indicate that adaptive testing scores are relatively robust to differences in IRT models and parameters. The full three-parameter model was the best theoretical match to the test and gave the best practical results, but the 2.5-parameter model results were not much different. Five tables present analysis results. (Contains 3 references.) (SLD)
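The models the abstract compares differ only in which parameters of the logistic item response function are free. As a minimal sketch (not code from the paper, and with illustrative parameter values), the three-parameter logistic probability of a correct response can be written so that the 2PL and the "2.5-parameter" variant fall out as special cases of the guessing parameter c:

```python
import math

def irt_prob(theta, a, b, c=0.0):
    """Probability of a correct response under the 3PL model.

    theta : examinee ability
    a     : item discrimination
    b     : item difficulty
    c     : pseudo-guessing lower asymptote

    With c = 0 this reduces to the 2PL model; fixing c = 0.2 for all
    items gives the "2.5-parameter" variant described in the abstract.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b, the 2PL gives 0.5 and the 2.5-parameter model gives
# 0.2 + 0.8 * 0.5 = 0.6, illustrating how the fixed guessing floor
# shifts the whole response curve upward.
p_2pl = irt_prob(0.0, a=1.0, b=0.0, c=0.0)   # 0.5
p_25  = irt_prob(0.0, a=1.0, b=0.0, c=0.2)   # 0.6
```

The item parameters here (a = 1.0, b = 0.0) are hypothetical; in the study they would come from one of the three calibration groups.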
Publication Type: Reports - Research; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Identifiers: Calibration; Data Conversion; Three Parameter Model; Two Parameter Model
Note: Paper presented at the Annual Meeting of the American Educational Research Association (New Orleans, LA, April 4-8, 1994).