50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 13 results
Peer reviewed
Kachchaf, Rachel; Solano-Flores, Guillermo – Applied Measurement in Education, 2012
We examined how rater language background affects the scoring of short-answer, open-ended test items in the assessment of English language learners (ELLs). Four native English-speaking and four native Spanish-speaking certified bilingual teachers scored 107 responses of fourth- and fifth-grade Spanish-speaking ELLs to mathematics items administered in…
Descriptors: Error of Measurement, English Language Learners, Scoring, Bilingual Teachers
Peer reviewed
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2012
This was a study of differential item functioning (DIF) for grades 4, 7, and 10 reading and mathematics items from state criterion-referenced tests. The tests were composed of multiple-choice and constructed-response items. Gender DIF was investigated using POLYSIBTEST and a Rasch procedure. The Rasch procedure flagged more items for DIF than did…
Descriptors: Test Bias, Gender Differences, Reading Tests, Mathematics Tests
Peer reviewed
Cho, Hyun-Jeong; Lee, Jaehoon; Kingston, Neal – Applied Measurement in Education, 2012
This study examined the validity of test accommodation in third- through eighth-graders using differential item functioning (DIF) and mixture IRT models. Two data sets were used for these analyses. With the first data set (N = 51,591) we examined whether item type (i.e., story, explanation, straightforward) or item features were associated with item…
Descriptors: Testing Accommodations, Test Bias, Item Response Theory, Validity
Peer reviewed
Engelhard, George, Jr.; Fincher, Melissa; Domaleski, Christopher S. – Applied Measurement in Education, 2011
This study examines the effects of two test administration accommodations on the mathematics performance of students within the context of a large-scale statewide assessment. The two test administration accommodations were resource guides and calculators. A stratified random sample of schools was selected to represent the demographic…
Descriptors: Testing Accommodations, Disabilities, High Stakes Tests, Program Effectiveness
Peer reviewed
Van Nijlen, Daniel; Janssen, Rianne – Applied Measurement in Education, 2011
The distinction between quantitative and qualitative differences in mastery is essential when monitoring student progress and is crucial for instructional interventions to deal with learning difficulties. Mixture item response theory (IRT) models can provide a convenient way to make the distinction between quantitative and qualitative differences…
Descriptors: Spelling, Indo European Languages, Vowels, Verbal Tests
Peer reviewed
Wise, Lauress L. – Applied Measurement in Education, 2010
The articles in this special issue make two important contributions to our understanding of the impact of accommodations on test score validity. First, they illustrate a variety of methods for collection and rigorous analyses of empirical data that can supplant expert judgment of the impact of accommodations. These methods range from internal…
Descriptors: Reading Achievement, Educational Assessment, Test Reliability, Learning Disabilities
Peer reviewed
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2010
Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…
Descriptors: Measures (Individuals), Item Response Theory, Robustness (Statistics), Item Analysis
Peer reviewed
Laitusis, Cara Cahalan – Applied Measurement in Education, 2010
This study examined the impact of a read-aloud accommodation on standardized test scores of reading comprehension at grades 4 and 8. Under a repeated measures design, students with and without reading-based learning disabilities took both a standard administration and a read-aloud administration of a reading comprehension test. Results show that…
Descriptors: Learning Disabilities, Standardized Tests, Scores, Academic Accommodations (Disabilities)
Peer reviewed
Cook, Linda; Eignor, Daniel; Sawaki, Yasuyo; Steinberg, Jonathan; Cline, Frederick – Applied Measurement in Education, 2010
This study compared the underlying factors measured by a state standards-based grade 4 English-Language Arts (ELA) assessment given to several groups of students. The focus of the research was to gather evidence regarding whether or not the tests measured the same construct or constructs for students without disabilities who took the test under…
Descriptors: Language Arts, Educational Assessment, Grade 4, State Standards
Peer reviewed
Lane, Suzanne; Zumbo, Bruno D.; Abedi, Jamal; Benson, Jeri; Dossey, John; Elliott, Stephen N.; Kane, Michael; Linn, Robert; Paredes-Ziker, Cindy; Rodriguez, Michael; Schraw, Gregg; Slattery, Jean; Thomas, Veronica; Willhoft, Joe – Applied Measurement in Education, 2009
Given the changing landscape of educational accountability at the local, state, and national levels, and the changes in the uses of the National Assessment of Educational Progress (NAEP), including the evolving uses of NAEP as a policy tool to interpret state assessment and accountability systems, an explicit statement of the current and potential…
Descriptors: National Competency Tests, Academic Achievement, Accountability, Test Validity
Peer reviewed
Penfield, Randall D.; Alvarez, Karina; Lee, Okhee – Applied Measurement in Education, 2009
The assessment of differential item functioning (DIF) in polytomous items addresses between-group differences in measurement properties at the item level, but typically does not inform which score levels may be involved in the DIF effect. The framework of differential step functioning (DSF) addresses this issue by examining between-group…
Descriptors: Test Bias, Classification, Test Items, Criteria
Peer reviewed
Kim, Do-Hong; Schneider, Christina; Siskind, Theresa – Applied Measurement in Education, 2009
This study examined the extent to which the underlying factor structure of the 2005 South Carolina Palmetto Achievement Challenge Tests (PACT) in science for grades 3, 4, and 5 was equivalent for students who were administered the test in a regular (standard) or accommodated form. Three accommodation groups were of interest: students who received…
Descriptors: Testing Accommodations, Science Tests, Elementary School Science, Measurement
Peer reviewed
Tong, Ye; Kolen, Michael J. – Applied Measurement in Education, 2007
A number of vertical scaling methodologies were examined in this article. Scaling variations included data collection design, scaling method, item response theory (IRT) scoring procedure, and proficiency estimation method. Vertical scales were developed for Grade 3 through Grade 8 for 4 content areas and 9 simulated datasets. A total of 11 scaling…
Descriptors: Achievement Tests, Scaling, Methods, Item Response Theory