ERIC Number: ED171776
Record Type: RIE
Publication Date: 1979-Mar
Reference Count: 0
Improving the Predictive Ability of Placement Tests Using the Rasch Model for Scoring.
Smith, Richard M.; Mitchell, Virginia P.
To improve the accuracy of college placement, Rasch scoring and person-fit statistics on the Comparative Guidance and Placement test (CGP) were compared to traditional right-only scoring. Correlations were calculated between English and mathematics course grades and scores of 1,448 entering freshmen on the reading, writing, and mathematics sections of the CGP. Results of correlating the three score estimates (right only, Rasch, and person-fit) with final course grades were mixed--it was impossible to identify any systematic differences among the correlations. Before disregarding the Rasch model, it should be recognized that the effect of both traditional and Rasch scoring is masked by the fact that course grade is not an interval scale; discriminant analysis could be a solution to this problem. In defense of the model, the person-fit analysis was sensitive enough to detect even mild cases of misfit and to recommend score corrections. The BICAL computer program routine for item analysis and recalibration was recommended for test revision when inappropriate items could not be removed from standardized tests. Future studies should develop a test-revising algorithm for the PANAL computer program to provide a completely automated system of misfit identification, test revision, and ability re-estimation. (CP)
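The contrast drawn in the abstract between right-only scoring, Rasch ability estimation, and person-fit analysis can be sketched in a few lines of Python. This is an illustrative sketch only, not the BICAL or PANAL implementations; the item difficulties, the Newton-Raphson estimator, and the use of the outfit mean-square as the person-fit statistic are all assumptions for illustration. Under the Rasch model the raw (right-only) score is a sufficient statistic for ability, so two examinees with the same raw score on the same items receive the same Rasch ability estimate; the person-fit statistic is what distinguishes a plausible response pattern from an aberrant one.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response for ability theta on an item
    of difficulty b, under the Rasch (one-parameter logistic) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=50):
    """Maximum-likelihood Rasch ability estimate for a 0/1 response
    vector, via Newton-Raphson. Assumes a non-perfect, non-zero score."""
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))
        information = sum(p * (1.0 - p) for p in probs)  # Fisher information
        theta += gradient / information
    return theta

def outfit(responses, difficulties, theta):
    """A simple person-fit index: mean squared standardized residual.
    Values well above 1 flag unexpected response patterns (misfit)."""
    z_sq = []
    for x, b in zip(responses, difficulties):
        p = rasch_prob(theta, b)
        z_sq.append((x - p) ** 2 / (p * (1.0 - p)))
    return sum(z_sq) / len(z_sq)

# Two examinees, same right-only score (3 of 5) on the same items:
d = [-2.0, -1.0, 0.0, 1.0, 2.0]     # illustrative item difficulties
expected = [1, 1, 1, 0, 0]          # passes easy items, fails hard ones
aberrant = [0, 0, 1, 1, 1]          # fails easy items, passes hard ones

t1 = estimate_ability(expected, d)
t2 = estimate_ability(aberrant, d)  # identical to t1: same raw score
fit1 = outfit(expected, d, t1)
fit2 = outfit(aberrant, d, t2)      # much larger: pattern misfits
```

The point of the sketch: right-only and Rasch scoring rank these two examinees identically, which is consistent with the mixed correlational results reported, while the person-fit statistic cleanly separates them, which is where the abstract locates the model's practical value.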
Descriptors: Academic Ability, Computer Programs, Difficulty Level, Goodness of Fit, Grade Prediction, Higher Education, Item Analysis, Mathematical Models, Predictive Measurement, Research Reports, Scoring Formulas, Statistical Studies, Student Characteristics, Student Placement, Tables (Data), Test Items, Test Validity
Publication Type: Speeches/Meeting Papers; Reports - Research
Education Level: N/A
Authoring Institution: N/A
Identifiers: Comparative Guidance and Placement Program; Rasch Model
Note: Paper presented at the Annual Meeting of the National Council on Measurement in Education (San Francisco, California, April 1979)