Showing all 4 results
Peer reviewed
Rogers, H. Jane; Hambleton, Ronald K. – Educational and Psychological Measurement, 1989
The validity of logistic test models and computer simulation methods for generating sampling distributions of item bias statistics was evaluated under the hypothesis of no item bias. Test data from 937 ninth-grade students were used to develop 7 steps for applying computer-simulated baseline statistics in test development. (SLD)
Descriptors: Computer Simulation, Educational Research, Evaluation Methods, Grade 9
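The abstract above describes simulating response data under a no-bias hypothesis to obtain an empirical baseline (null) distribution for an item bias statistic. As a rough illustration only, and not the authors' procedure, the Python sketch below simulates Rasch responses with identical item difficulties for two groups and accumulates the Mantel-Haenszel chi-square for one studied item across replications; the function names, sample sizes, and replication count are invented for the example.

```python
# Hedged sketch: empirical null distribution of a DIF statistic under a Rasch
# model with no item bias (illustrative only, not the study's procedure).
import numpy as np

rng = np.random.default_rng(0)

def simulate_rasch(thetas, b):
    """Dichotomous responses under the Rasch model P = 1/(1+exp(-(theta-b)))."""
    p = 1.0 / (1.0 + np.exp(-(thetas[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def mh_chisq(resp_ref, resp_foc, item):
    """Mantel-Haenszel chi-square for one studied item, stratified by total score."""
    x_r, x_f = resp_ref[:, item], resp_foc[:, item]
    s_r, s_f = resp_ref.sum(axis=1), resp_foc.sum(axis=1)
    num, var = 0.0, 0.0
    for k in np.union1d(s_r, s_f):
        a = np.sum(x_r[s_r == k]); b_ = np.sum(1 - x_r[s_r == k])
        c = np.sum(x_f[s_f == k]); d = np.sum(1 - x_f[s_f == k])
        n = a + b_ + c + d
        if n < 2:
            continue
        num += a - (a + b_) * (a + c) / n                        # observed minus expected
        var += (a + b_) * (c + d) * (a + c) * (b_ + d) / (n**2 * (n - 1))
    return (abs(num) - 0.5) ** 2 / var if var > 0 else 0.0

# Build an empirical null distribution for one item (no bias is simulated).
b = rng.normal(0, 1, size=30)           # common item difficulties for both groups
null_stats = []
for _ in range(200):                     # small number of replications, for illustration
    ref = simulate_rasch(rng.normal(0, 1, 500), b)
    foc = simulate_rasch(rng.normal(0, 1, 500), b)
    null_stats.append(mh_chisq(ref, foc, item=0))

print("Empirical 95th percentile:", np.quantile(null_stats, 0.95))  # near the chi2(1) cutoff of 3.84
```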
Peer reviewed
Smith, Richard M. – Educational and Psychological Measurement, 1994
Simulated data are used to assess the appropriateness of using separate calibration and between-fit approaches to detecting item bias in the Rasch rating scale model. Results indicate that Type I error rates under the null distribution hold even when reference and focal groups differ in ability level. (SLD)
Descriptors: Ability, Goodness of Fit, Identification, Item Bias
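For readers unfamiliar with the between-fit idea mentioned above, the following Python sketch shows one common form of a between-group fit statistic under a common Rasch calibration. It uses the simpler dichotomous case rather than the rating scale model studied in the article, treats ability and difficulty as known, and refers the statistic to a chi-square with G - 1 degrees of freedom; all of these choices are assumptions made for illustration, not the article's code.

```python
# Hedged sketch (assumed form): between-group fit for one item under a common
# Rasch calibration, comparing reference and focal groups.
import numpy as np
from scipy.stats import chi2

def rasch_p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def between_fit(x, theta, group, b):
    """x: 0/1 responses to one item; group: 0 = reference, 1 = focal."""
    p = rasch_p(theta, b)
    stat = 0.0
    for g in np.unique(group):
        m = group == g
        resid_sum = np.sum(x[m] - p[m])          # summed residuals within the group
        info_sum = np.sum(p[m] * (1 - p[m]))     # summed binomial variances
        stat += resid_sum**2 / info_sum
    df = len(np.unique(group)) - 1
    return stat, 1 - chi2.cdf(stat, df)

# Toy usage with simulated, bias-free data (should be non-significant on average).
rng = np.random.default_rng(1)
theta = rng.normal(0, 1, 1000)
group = (rng.random(1000) < 0.5).astype(int)
b = 0.3                                          # common difficulty, no DIF
x = (rng.random(1000) < rasch_p(theta, b)).astype(int)
print(between_fit(x, theta, group, b))
```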
Peer reviewed
Smith, Richard M. – Educational and Psychological Measurement, 1991
This study reports results of a simulation-based investigation of the distributional properties of the item fit statistics commonly used in Rasch model calibration programs as indices of how well responses to individual items fit the measurement model. (SLD)
Descriptors: Computer Simulation, Equations (Mathematics), Goodness of Fit, Item Response Theory
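As context for the abstract above, the sketch below computes the standard unweighted (outfit) and information-weighted (infit) mean-square item fit statistics for the dichotomous Rasch case. It is a generic illustration with invented data, not the specific statistics or calibration programs examined in the study.

```python
# Illustrative sketch: infit and outfit mean squares from residuals of observed
# responses against Rasch model expectations (dichotomous case assumed).
import numpy as np

def rasch_p(theta, b):
    """P(x=1) under the Rasch model for abilities theta and difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def item_fit(responses, theta, b):
    """Return (outfit_MS, infit_MS) per item for an N x I 0/1 response matrix."""
    p = rasch_p(theta, b)
    w = p * (1 - p)                              # binomial variance of each response
    z2 = (responses - p) ** 2 / w                # squared standardized residuals
    outfit = z2.mean(axis=0)                     # unweighted mean square
    infit = ((responses - p) ** 2).sum(axis=0) / w.sum(axis=0)   # information-weighted
    return outfit, infit

# Toy usage with data simulated to fit the model: both statistics should hover near 1.
rng = np.random.default_rng(2)
theta = rng.normal(0, 1, 2000)
b = np.linspace(-2, 2, 20)
x = (rng.random((2000, 20)) < rasch_p(theta, b)).astype(int)
outfit, infit = item_fit(x, theta, b)
print(outfit.round(2), infit.round(2))
```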
Peer reviewed
Smith, Richard M. – Educational and Psychological Measurement, 1996
The separate calibration t-test approach of B. Wright and M. Stone (1979) and the common calibration between-fit approach of B. Wright, R. Mead, and R. Draba (1976) appeared to have similar Type I error rates and similar power to detect item bias within a Rasch framework. (SLD)
Descriptors: Comparative Analysis, Goodness of Fit, Item Bias, Item Response Theory
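To make the comparison above concrete, here is a minimal Python sketch of the separate-calibration t comparison in the general form attributed to Wright and Stone (1979): item difficulties are estimated separately in the reference and focal groups (by a Rasch calibration program, assumed to have been run elsewhere), placed on a common scale, and compared item by item. The difficulty estimates and standard errors below are hypothetical.

```python
# Sketch of the separate-calibration t comparison; only the final comparison is
# shown, with hypothetical estimates standing in for separate Rasch calibrations.
import numpy as np

def separate_calibration_t(d_ref, se_ref, d_foc, se_foc):
    """t = (d_ref - d_foc) / sqrt(se_ref^2 + se_foc^2), computed per item."""
    d_ref, se_ref = np.asarray(d_ref), np.asarray(se_ref)
    d_foc, se_foc = np.asarray(d_foc), np.asarray(se_foc)
    return (d_ref - d_foc) / np.sqrt(se_ref**2 + se_foc**2)

# Hypothetical item difficulty estimates (logits) and standard errors.
d_ref  = [0.12, -0.85, 1.40]
se_ref = [0.09,  0.10, 0.12]
d_foc  = [0.05, -0.80, 1.95]
se_foc = [0.10,  0.11, 0.13]
print(separate_calibration_t(d_ref, se_ref, d_foc, se_foc))
# |t| greater than about 2 would flag the third item for further scrutiny.
```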