ERIC Number: ED442842
Record Type: RIE
Publication Date: 1998-Apr-17
Some Considerations for Eliminating Biases in Ability Estimation in Computerized Adaptive Testing.
Item response theory (IRT) has served as the theoretical foundation of computerized adaptive testing (CAT) for several decades. In applying IRT to CAT, certain considerations are essential yet tend to be neglected. This paper addresses these issues and then proposes and discusses several ways of eliminating noise and bias in estimating the individual parameter, theta, of person "a," so that accuracy and efficiency in ability estimation can be increased. The content validity of the ability dimension is emphasized, and the idea of core test items is proposed. Devices are suggested for eliminating noise from multiple-choice items by making effective use of nonparametric estimation of the operating characteristics in pilot studies. Use of the normal ogive model is suggested instead of the three-parameter logistic model. It is further suggested that several graded response items be used at the beginning of the CAT to avoid the bias and lack of information inherent in dichotomous response items. The Weighted Likelihood Estimate of T. Warm (1989) and its expanded form for general discrete responses are discussed as an effective method of eliminating bias in ability estimation, and the usefulness of Warm's weight function as a prior is discussed. Use of the modified test information function is suggested for the same purpose. (Contains 10 figures and 18 references.) (SLD)
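The abstract's central tool, Warm's (1989) Weighted Likelihood Estimate, can be sketched for dichotomous items. The sketch below is illustrative only and uses the two-parameter logistic model for simplicity (the paper itself favors the normal ogive model and extends the estimator to graded responses): the WLE maximizes the log-likelihood plus half the log of the test information, which removes the first-order bias of the maximum likelihood estimate and keeps the estimate finite even for perfect response patterns. All function and variable names here are hypothetical, not from the paper.

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def wle(responses, items, lo=-4.0, hi=4.0, step=0.001):
    """Warm's Weighted Likelihood Estimate of theta for 2PL items.

    Maximizes log L(theta) + 0.5 * log I(theta), where I is the test
    information; equivalently, an MLE weighted by sqrt(I(theta)).
    `items` is a list of (a, b) discrimination/difficulty pairs.
    """
    best_theta, best_val = lo, -math.inf
    n_steps = int(round((hi - lo) / step))
    for k in range(n_steps + 1):
        theta = lo + k * step
        loglik, info = 0.0, 0.0
        for x, (a, b) in zip(responses, items):
            p = p_2pl(theta, a, b)
            loglik += x * math.log(p) + (1 - x) * math.log(1.0 - p)
            info += a * a * p * (1.0 - p)  # 2PL item information
        val = loglik + 0.5 * math.log(info)  # Warm's weighted objective
        if val > best_val:
            best_val, best_theta = val, theta
    return best_theta

# A perfect response pattern drives the plain MLE to +infinity;
# the WLE remains finite (about 1.95 for three items with a=1, b=0).
items = [(1.0, 0.0)] * 3
theta_hat = wle([1, 1, 1], items)
```

For three identical items with a = 1 and b = 0 answered correctly, the weighted score equation has the closed-form solution theta = ln(7) ≈ 1.946, which the grid search recovers; this finiteness at extreme score patterns is one sense in which the estimator "eliminates bias" early in a CAT, when few responses are available.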
Publication Type: Reports - Descriptive; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Note: Paper presented at the Annual Meeting of the American Educational Research Association (San Diego, CA, April 13-17, 1998).