ERIC Number: ED483397
Record Type: Non-Journal
Publication Date: 2004-May
The Behavior of Linking Items in Test Equating. CSE Report 630
Haertel, Edward H.
US Department of Education
Large-scale testing programs often require multiple forms to maintain test security over time or to enable the measurement of change without repeating identical questions. The comparability of scores across forms is consequential: Students are admitted to colleges based on their test scores, and a given scale score should carry the same meaning from one year to the next. Agencies set scale-score cut points defining passing levels for professional certification, and fairness requires that these standards be held constant over time. Large-scale evaluations or comparisons of educational programs may require pretest and posttest scale scores in a common metric. In short, to allow interchangeable use of alternate forms of tests built to the same content and statistical specifications, scores based on different sets of items must often be placed on a common scale, a process called test equating.
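The idea of placing two forms on a common scale can be illustrated with one of the simplest equating methods, linear (mean-sigma) equating, in which Form X scores are transformed so their mean and standard deviation match those of Form Y. This is a generic sketch for illustration only, not the method analyzed in the report; the score samples are hypothetical.

```python
import statistics

def linear_equate(scores_x, scores_y, x):
    """Map a raw score x from Form X onto the Form Y scale by
    matching means and standard deviations (mean-sigma equating)."""
    mx, sx = statistics.mean(scores_x), statistics.pstdev(scores_x)
    my, sy = statistics.mean(scores_y), statistics.pstdev(scores_y)
    return my + (sy / sx) * (x - mx)

# Hypothetical raw-score samples from examinees taking each form
form_x = [10, 12, 14, 16, 18]
form_y = [20, 24, 28, 32, 36]

# A Form X score at its own mean maps to the Form Y mean
print(linear_equate(form_x, form_y, 14))  # 28.0
```

In operational programs the two forms are taken by different groups, so a set of common linking items is embedded in both forms to disentangle form difficulty from group ability; the stability of those linking items is the subject of this report.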
Descriptors: Measurement, Testing Programs, Equated Scores, Test Use, Student Evaluation, Statistical Analysis, Standards, Standardized Tests
National Center for Research on Evaluation, Standards, and Student Testing (CRESST), Graduate School of Education & Information Studies, University of California, Los Angeles, Los Angeles, CA 90095-1522. Tel: 310-206-1532.
Publication Type: Reports - Research
Education Level: N/A
Sponsor: Institute of Education Sciences (IES), Washington, DC.
Authoring Institution: Center for Research on Evaluation, Standards, and Student Testing, Los Angeles, CA.
Grant or Contract Numbers: N/A