Descriptor
Mathematical Models | 5
Test Items | 5
Achievement Tests | 3
Comparative Analysis | 2
Difficulty Level | 2
Error of Measurement | 2
Item Analysis | 2
Probability | 2
Test Reliability | 2
Analysis of Variance | 1
Computer Programs | 1


Publication Type
Journal Articles | 5
Reports - Research | 5


Assessments and Surveys
Iowa Tests of Educational… | 1


Peer reviewed

Feldt, Leonard S. – Educational and Psychological Measurement, 1984

The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 (KR-21) as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR-20 as the reliability estimate. (Author/BW)

Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
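The two estimates discussed above can be sketched from a 0/1 item-score matrix using the standard KR-20 and KR-21 formulas. This is a minimal illustration, not code from the article; all function and variable names are my own.

```python
# KR-20 and KR-21 reliability estimates from dichotomous (0/1) item
# scores: rows = examinees, columns = items. Population variances used.

def kr20(scores):
    """Kuder-Richardson formula 20."""
    k = len(scores[0])                       # number of items
    n = len(scores)                          # number of examinees
    totals = [sum(row) for row in scores]    # total score per examinee
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    # sum over items of p_i * (1 - p_i), the item score variances
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in scores) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

def kr21(scores):
    """Kuder-Richardson formula 21 (treats all items as equally hard)."""
    k = len(scores[0])
    n = len(scores)
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))
```

Because KR-21 treats item-difficulty differences as error, it can never exceed KR-20, which is consistent with the abstract's point about removing the form-to-form difficulty component from error variance.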

Peer reviewed

Albanese, Mark A.; Forsyth, Robert A. – Educational and Psychological Measurement, 1984

The purpose of this study was to compare the relative robustness of the one-, two-, and modified two-parameter latent trait logistic models for the Iowa Tests of Educational Development. Results suggest that the modified two-parameter model may provide the best representation of the data. (Author/BW)

Descriptors: Achievement Tests, Comparative Analysis, Goodness of Fit, Item Analysis
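For reference, the one- and two-parameter logistic item characteristic curves compared in studies like this one take the following standard forms. This is a generic sketch (the article's "modified two-parameter" variant is not reproduced here), and the names are illustrative.

```python
import math

def icc_1pl(theta, b):
    """One-parameter (Rasch-type) logistic curve: ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def icc_2pl(theta, a, b):
    """Two-parameter logistic curve: adds a discrimination parameter a."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

The extra discrimination parameter lets the 2PL curve vary in steepness across items, which is the flexibility a robustness comparison of this kind trades off against the simpler one-parameter model.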

Peer reviewed

Wilcox, Rand R. – Educational and Psychological Measurement, 1981

A formal framework is presented for determining which of the distractors of multiple-choice test items has a small probability of being chosen by a typical examinee. The framework is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. (Author/BW)

Descriptors: Mathematical Models, Multiple Choice Tests, Probability, Test Items
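A much-simplified sketch of the general idea, flagging distractors whose observed selection proportion is small. This is not Wilcox's indifference-zone procedure, only an illustration; the threshold `delta` and all names are assumptions.

```python
# Flag distractors of one multiple-choice item that are rarely chosen.
# counts[j] = number of examinees choosing option j; index 0 is the
# keyed (correct) answer, so only indices 1.. are distractors.

def rarely_chosen(counts, delta=0.05):
    """Return indices of distractors chosen by fewer than delta of examinees."""
    n = sum(counts)
    return [j for j, c in enumerate(counts) if j > 0 and c / n < delta]
```

Wilcox's framework replaces this naive thresholding with a decision procedure that controls the probability of a correct identification, analogous to indifference-zone ranking and selection.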

Peer reviewed

Huck, Schuyler W.; And Others – Educational and Psychological Measurement, 1981

Believing that examinee-by-item interaction should be conceptualized as true score variability rather than as a result of errors of measurement, Lu proposed a modification of Hoyt's analysis of variance reliability procedure. Via a computer simulation study, it is shown that Lu's approach does not separate interaction from error. (Author/RL)

Descriptors: Analysis of Variance, Comparative Analysis, Computer Programs, Difficulty Level
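Hoyt's ANOVA reliability procedure, which Lu's modification builds on, can be sketched as follows. This is a minimal illustration of the standard formula (MS_persons - MS_residual) / MS_persons under a persons-by-items two-way layout; names are illustrative, and Lu's modification itself is not reproduced.

```python
# Hoyt's (ANOVA-based) reliability from a persons-by-items score matrix.
# The residual mean square pools the person-by-item interaction with
# error, which is exactly the confounding at issue in the abstract above.

def hoyt_reliability(scores):
    n = len(scores)                          # persons
    k = len(scores[0])                       # items
    grand = sum(sum(row) for row in scores) / (n * k)
    person_means = [sum(row) / k for row in scores]
    item_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_persons = k * sum((m - grand) ** 2 for m in person_means)
    ss_items = n * sum((m - grand) ** 2 for m in item_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_resid = ss_total - ss_persons - ss_items
    ms_persons = ss_persons / (n - 1)
    ms_resid = ss_resid / ((n - 1) * (k - 1))
    return (ms_persons - ms_resid) / ms_persons
```

For dichotomous data this quantity equals coefficient alpha (KR-20); the interaction term sits inside `ms_resid`, illustrating why a one-administration design cannot separate interaction from error, as the simulation study concluded.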

Peer reviewed

Wilcox, Rand R. – Educational and Psychological Measurement, 1979

Wilcox has described three probability models which characterize a single test item in terms of a population of examinees (ED 156 718). This note indicates that similar models can be derived which characterize a single examinee in terms of an item domain. A numerical illustration is given. (Author/JKS)

Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Probability