ERIC Number: ED380786
Record Type: Non-Journal
Publication Date: 1995-Apr
Alternative Approaches to Vocabulary Assessment. Technical Report No. 607.
Stallman, Anne C.; And Others
Interviews with children about their knowledge of a set of words were used to examine the concurrent validity of three paper-and-pencil measures of knowledge of these words--a standardized vocabulary test and two experimenter-designed tests. One experimenter-designed test, the Levels test, had three multiple-choice items per word that targeted three different levels of word knowledge. The other was a forced-choice Contexts test with five items per word, each requiring a decision about whether the word was used appropriately in the context. Subjects were 50 students from two heterogeneously grouped fifth-grade classrooms in a midwestern school district. All three paper-and-pencil measures showed acceptable levels of reliability. When subjects were used as the unit of analysis, the interview was more highly correlated with the standardized test and the Levels test than with the Contexts test. When the word was used as the unit of analysis, the interview correlated more highly with the Contexts and Levels tests than with the standardized test. These results are interpreted as indicating that standardized measures are more effective at discriminating among students on the basis of their overall ability, but less accurate as measures of how much the students know about particular words. The Contexts test has the advantages of the highest reliability of the three measures, as well as the greatest instructional validity. (Contains 25 references and 7 tables of data.) (Author/RS)
Publication Type: Reports - Research
Education Level: N/A
Authoring Institution: Center for the Study of Reading, Urbana, IL.