50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 5 results
Peer reviewed
Direct link
Noble, Tracy; Rosebery, Ann; Suarez, Catherine; Warren, Beth; O'Connor, Mary Catherine – Applied Measurement in Education, 2014
English language learners (ELLs) and their teachers, schools, and communities face increasingly high-stakes consequences due to test score gaps between ELLs and non-ELLs. It is essential that the field of educational assessment continue to investigate the meaning of these test score gaps. This article discusses the findings of an exploratory study…
Descriptors: English Language Learners, Evidence, Educational Assessment, Achievement Gap
Peer reviewed
Direct link
Welsh, Megan E.; Eastwood, Melissa; D'Agostino, Jerome V. – Applied Measurement in Education, 2014
Teacher and school accountability systems based on high-stakes tests are ubiquitous throughout the United States and appear to be growing as a catalyst for reform. As a result, educators have increased the proportion of instructional time devoted to test preparation. Although guidelines for what constitutes appropriate and inappropriate test…
Descriptors: High Stakes Tests, Instruction, Test Preparation, Grade 3
Peer reviewed
Direct link
Taylor, Melinda Ann; Pastor, Dena A. – Applied Measurement in Education, 2013
Although federal regulations require testing students with severe cognitive disabilities, there is little guidance regarding how technical quality should be established. Challenges are known to exist in documenting the reliability of scores for alternate assessments, and typical measures of reliability do little to model multiple sources…
Descriptors: Generalizability Theory, Alternative Assessment, Test Reliability, Scores
Peer reviewed
Direct link
Banks, Kathleen – Applied Measurement in Education, 2012
The purpose of this article is to illustrate a seven-step process for determining whether inferential reading items were more susceptible to cultural bias than literal reading items. The seven-step process was demonstrated using multiple-choice data from the reading portion of a reading/language arts test for fifth and seventh grade Hispanic,…
Descriptors: Reading Tests, Test Items, Standardized Tests, Test Bias
Peer reviewed
Direct link
Van Nijlen, Daniel; Janssen, Rianne – Applied Measurement in Education, 2011
The distinction between quantitative and qualitative differences in mastery is essential when monitoring student progress and is crucial for instructional interventions to deal with learning difficulties. Mixture item response theory (IRT) models can provide a convenient way to make the distinction between quantitative and qualitative differences…
Descriptors: Spelling, Indo European Languages, Vowels, Verbal Tests