ERIC Number: ED270473
Record Type: Non-Journal
Publication Date: 1986-Apr
Pages: 13
Abstractor: N/A
ISBN: N/A
ISSN: N/A
EISSN: N/A
Practical Questions about Item Response Models in Large-Scale Assessment Programs.
Legg, Sue M.; Algina, James
This paper focuses on the questions that arise as test practitioners monitor score scales derived from latent trait theory. Large-scale assessment programs are dynamic and constantly challenge the assumptions and limits of latent trait models. Even as testing programs evolve, test scores must remain reliable indicators of progress. Fundamental questions concern the extent to which score shifts reflect changes in achievement rather than changes in how achievement is measured. A number of measurement concerns have been raised over time as item calibrations and score scales are monitored; these concerns relate to the effects of item selection procedures and changes in test content on score scales. The following questions are discussed: (1) Can equating procedures accommodate changes in curriculum and test content? (2) What are the effects of variations in item format, population, and test administration? (3) What are the effects of different item difficulty distributions on score scales? (4) Which estimation procedure or latent trait model best fits the data? and (5) How can the meaning of test scores be enhanced? (PN)
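For readers unfamiliar with the terminology, a latent trait (item response) model expresses the probability of a correct response to an item as a function of an examinee's ability and the item's parameters. As a minimal illustration (not necessarily the specific model examined in the paper), the three-parameter logistic model is commonly written as

\[
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\!\bigl[-a_i(\theta - b_i)\bigr]},
\]

where \(\theta\) is the examinee's ability and \(a_i\), \(b_i\), and \(c_i\) are the discrimination, difficulty, and lower-asymptote (guessing) parameters estimated when item \(i\) is calibrated. Score scales built from such models depend on how stable these item parameter estimates remain as test content, populations, and administration conditions change, which is the practical concern the paper addresses.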
Publication Type: Speeches/Meeting Papers; Reports - Evaluative
Education Level: N/A
Audience: Researchers
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A