ERIC Number: ED395943
Record Type: Non-Journal
Publication Date: 1989-Jun
Reference Count: N/A
Developing and Evaluating a Machine-Scorable, Constrained Constructed-Response Item.
Braun, Henry I.; And Others
The use of constructed-response items in large-scale standardized testing has been hampered by the costs and difficulties of obtaining reliable scores. The advent of expert systems may signal the eventual removal of this impediment. This study investigated the accuracy with which expert systems could score a new, non-multiple-choice item type. The item type presents a faulty solution to a computer programming problem and asks the student to correct the solution. This item type was administered to a sample of high school seniors enrolled in an Advanced Placement course in computer science who also took the Advanced Placement Computer Science (APCS) Test. Results from 737 students for the first problem and 734 of these students for the second problem indicate that the expert systems were able to produce scores for between 82% and 97% of the solutions encountered and to display high agreement with a human reader on which solutions were and were not correct. Diagnoses of the specific errors produced by students were less accurate. Correlations with scores on the objective and free-response sections of the APCS examination were moderate. Implications for additional research and for testing practice are offered. Appendix A presents the faulty-solution problems, and Appendix B gives the correlation matrices for the APCS and the problems. (Contains 10 tables and 17 references.) (Author/SLD)
Publication Type: Reports - Evaluative
Education Level: N/A
Authoring Institution: Educational Testing Service, Princeton, NJ.
Identifiers - Assessments and Surveys: Advanced Placement Examinations (CEEB)