Peer reviewed
Ndalichako, Joyce L.; Rogers, W. Todd – Educational and Psychological Measurement, 1997
Ability estimates obtained by scoring multiple-choice items under finite state score theory, item response models, and classical test theory were compared using the responses of 1,230 examinees. The scoring models produced essentially the same ranking of examinees, but ease of use and interpretation favor the classical test model. (SLD)
Descriptors: Ability, Comparative Analysis, Estimation (Mathematics), High School Students
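The classical test model favored in the abstract above estimates ability by the simplest possible rule: an examinee's raw score is the number of items answered correctly, and examinees are ranked by that score. A minimal sketch of that idea, using entirely hypothetical response data (the examinee names, answer key, and responses are illustrative only, not from the study):

```python
# Classical test theory (CTT) number-correct scoring: each examinee's
# ability estimate is simply the count of items answered correctly.
# All data below is hypothetical, for illustration only.
answer_key = ["A", "C", "B", "D", "A"]

responses = {
    "examinee_1": ["A", "C", "B", "D", "B"],
    "examinee_2": ["B", "C", "A", "D", "A"],
    "examinee_3": ["A", "C", "B", "D", "A"],
}

def number_correct(resp, key):
    """CTT raw score: count of responses matching the answer key."""
    return sum(r == k for r, k in zip(resp, key))

scores = {name: number_correct(resp, answer_key)
          for name, resp in responses.items()}

# Rank examinees by raw score, highest estimated ability first.
ranking = sorted(scores, key=scores.get, reverse=True)
```

The study's point is that more elaborate scoring models (item response models, finite state score theory) ordered examinees in essentially the same way as this number-correct ranking, while being harder to apply and interpret.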
Peer reviewed
Trevisan, Michael S.; And Others – Educational and Psychological Measurement, 1991
The reliability and validity of multiple-choice tests were computed as a function of the number of options per item and student ability for 435 parochial high school juniors, who were administered the Washington Pre-College Test Battery. Results suggest the efficacy of the three-option item. (SLD)
Descriptors: Ability, Comparative Testing, Distractors (Tests), Grade Point Average
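For dichotomously scored multiple-choice items like those in the study above, test reliability under classical test theory is commonly computed with the Kuder-Richardson formula 20 (KR-20). A minimal sketch with a hypothetical 0/1 response matrix (the data is invented for illustration; the study's actual reliability procedure is not specified in the abstract):

```python
# KR-20 reliability for dichotomously scored items (classical test theory):
#   KR-20 = (k / (k - 1)) * (1 - sum(p_j * q_j) / var(total scores))
# where k is the number of items, p_j the proportion answering item j
# correctly, and q_j = 1 - p_j.
# Hypothetical response matrix: rows = examinees, columns = items (1 = correct).
data = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

def kr20(matrix):
    """KR-20 internal-consistency reliability of a 0/1 item-response matrix."""
    n = len(matrix)           # number of examinees
    k = len(matrix[0])        # number of items
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    # Sum of item variances p * (1 - p) across items.
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

reliability = kr20(data)
```

A result like the study's would then be obtained by recomputing such reliability coefficients on versions of the test with five, four, and three options per item and comparing them.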