ERIC Number: ED331879
Record Type: Non-Journal
Publication Date: 1990-Aug
Reference Count: N/A
Influence of Item Parameter Errors in Test Development.
Hambleton, Ronald K.; And Others
Item response theory (IRT) model parameter estimates have considerable merit and open up new directions for test development, but misleading results are often obtained because of errors in the item parameter estimates. The problem of the effects of item parameter estimation errors on the test development process is discussed, and its seriousness is demonstrated with simulated data sets. Solutions are offered for this problem in test development practice, which arises because item information functions are determined by item parameter values that in turn contain error. When the best items are selected on the basis of their statistical characteristics, there is a tendency to capitalize on chance due to errors in the item parameter estimates: among the generally promising test items, those with the most overestimated parameter estimates are also the most likely to be selected. As a result, the constructed test falls short of the one desired or expected. Simulation studies using a hypothetical pool of 150 test items with sample sizes of 1,000 and 400 confirmed that tests built by selecting items to match a target test information function do not perform as well as expected, and that the corresponding standard errors are underestimated. The following suggestions for addressing this problem are presented: (1) use large samples in item calibration to gain precision in the item parameter estimates; (2) revise extreme item parameter estimates by subtracting one or two standard errors from their values; and (3) exceed the desired target information by 20 to 30%. Two tables and six graphs supplement the discussion. (SLD)
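The capitalization-on-chance mechanism described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' actual simulation: it assumes a hypothetical 150-item pool in which each item has a true information value and a noisy estimate of it (the noise standing in for item parameter estimation error), and it selects the apparently best items by the estimates alone.

```python
import random

random.seed(1)

N_POOL, N_SELECT = 150, 20  # hypothetical pool and test lengths

# True item information for each item, and a noisy estimate of it.
# The Gaussian noise plays the role of item parameter estimation error.
true_info = [random.uniform(0.2, 1.0) for _ in range(N_POOL)]
est_info = [t + random.gauss(0, 0.15) for t in true_info]

# Select the apparently best items on the basis of their *estimated*
# statistical characteristics, as in routine test assembly.
selected = sorted(range(N_POOL), key=lambda i: est_info[i], reverse=True)[:N_SELECT]

promised = sum(est_info[i] for i in selected)    # information the estimates claim
delivered = sum(true_info[i] for i in selected)  # information the test truly provides

print(f"promised: {promised:.2f}  delivered: {delivered:.2f}")
```

Because items whose estimates happen to be inflated by error are disproportionately selected, the delivered information falls below the promised total, which is the abstract's point that the constructed test falls short of the target and that standard errors are correspondingly underestimated. The paper's suggested remedies map directly onto this sketch: a larger calibration sample shrinks the noise, shrinking extreme estimates by one or two standard errors deflates `est_info` for the items most at risk, and aiming 20 to 30% above the target leaves room for the shortfall.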
Publication Type: Reports - Evaluative; Speeches/Meeting Papers
Education Level: N/A
Sponsor: Graduate Management Admission Council, Princeton, NJ.
Authoring Institution: N/A