50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 16 to 30 of 33 results
Peer reviewed
Sireci, Stephen G. – Journal of Applied Testing Technology, 2009
The articles in this special issue of the "Journal of Applied Testing Technology" represent significant steps forward in the area of evaluating the validity of methods for assessing the educational achievement of students with disabilities. The studies address some of the most difficult student groups to assess--students with learning…
Descriptors: Learning Disabilities, Reading Tests, Evaluation Methods, Special Needs Students
Peer reviewed
Abedi, Jamal – Journal of Applied Testing Technology, 2009
English language learners with disabilities (ELLWD) face many challenges in their academic career. Learning a new language and coping with their disabilities create obstacles in their academic progress. Variables affecting the accessibility of assessments for students with disabilities and ELL students may seriously hinder the academic performance of…
Descriptors: Reading Achievement, Second Language Learning, Disabilities, Classification
Peer reviewed
Steinberg, Jonathan; Cline, Frederick; Ling, Guangming; Cook, Linda; Tognatta, Namrata – Journal of Applied Testing Technology, 2009
This study examines the appropriateness of a large-scale state standards-based English-Language Arts (ELA) assessment for students who are deaf or hard of hearing by comparing the internal test structures for these students to students without disabilities. The Grade 4 and 8 ELA assessments were analyzed via a series of parcel-level exploratory…
Descriptors: Test Bias, Language Arts, State Standards, Partial Hearing
Peer reviewed
Laitusis, Cara Cahalan; Maneckshana, Behroz; Monfils, Lora; Ahlgrim-Delzell, Lynn – Journal of Applied Testing Technology, 2009
The purpose of this study was to examine Differential Item Functioning (DIF) by disability groups on an on-demand performance assessment for students with severe cognitive impairments. Researchers examined the presence of DIF for two comparisons. One comparison involved students with severe cognitive impairments who served as the reference group…
Descriptors: Test Bias, Test Items, Autism, Performance Based Assessment
Peer reviewed
Camara, Wayne – Journal of Applied Testing Technology, 2009
The five papers in this special issue of the "Journal of Applied Testing Technology" address fundamental issues of validity when tests are modified or accommodations are provided to English Language Learners (ELL) or students with disabilities. Three papers employed differential item functioning (DIF) and factor analysis and found the underlying…
Descriptors: Second Language Learning, Factor Analysis, English (Second Language), Cognitive Ability
Peer reviewed
Russell, Michael; Kavanaugh, Maureen; Masters, Jessica; Higgins, Jennifer; Hoffmann, Thomas – Journal of Applied Testing Technology, 2009
Many students who are deaf or hard-of-hearing are eligible for a signing accommodation for state and other standardized tests. The signing accommodation, however, presents several challenges for testing programs that attempt to administer tests under standardized conditions. One potential solution for many of these challenges is the use of…
Descriptors: Testing Programs, Student Attitudes, Standardized Tests, Academic Achievement
Peer reviewed
Moen, Ross; Liu, Kristi; Thurlow, Martha; Lekwa, Adam; Scullin, Sarah; Hausmann, Kristin – Journal of Applied Testing Technology, 2009
Some students are less accurately measured by typical reading tests than other students. By asking teachers to identify students whose performance on state reading tests would likely underestimate their reading skills, this study sought to learn about characteristics of less accurately measured students while also evaluating how well teachers can…
Descriptors: Reading Tests, Academic Achievement, Interviews, Program Effectiveness
Peer reviewed
Cook, Linda; Eignor, Daniel; Steinberg, Jonathan; Sawaki, Yasuyo; Cline, Frederick – Journal of Applied Testing Technology, 2009
The purpose of this study was to investigate the impact of a read-aloud test change administered with the Gates-MacGinitie Reading Test (GMRT) on the underlying constructs measured by the Comprehension subtest. The study evaluated the factor structures for the Level 4 Comprehension subtest given to a sample of New Jersey fourth-grade students with…
Descriptors: Reading Comprehension, Learning Disabilities, Factor Structure, Factor Analysis
Peer reviewed
Ferrara, Steve; Perie, Marianne; Johnson, Eugene – Journal of Applied Testing Technology, 2008
Psychometricians continue to introduce new approaches to setting cut scores for educational assessments in an attempt to improve on current methods. In this paper we describe the Item-Descriptor (ID) Matching method, a method based on IRT item mapping. In ID Matching, test content area experts match items (i.e., their judgments about the knowledge…
Descriptors: Test Results, Test Content, Testing Programs, Educational Testing
Peer reviewed
Yu, Chong Ho; Jannasch-Pennell, Angel; DiGangi, Samuel – Journal of Applied Testing Technology, 2008
Since the introduction of the "No Child Left Behind Act," assessment has become a predominant theme in the US K-12 system. However, making assessment results understandable and usable for K-12 teachers has been a challenge. While test technology offered by various vendors has been widely implemented, technology of training for test…
Descriptors: Item Response Theory, Misconceptions, Computer Assisted Instruction, Inservice Teacher Education
Peer reviewed
Rodeck, Elaine M.; Chin, Tzu-Yun; Davis, Susan L.; Plake, Barbara S. – Journal of Applied Testing Technology, 2008
This study examined the relationships between the evaluations obtained from standard setting panelists and changes in ratings between different rounds of a standard setting study that involved setting standards on different language versions of an exam. We investigated panelists' evaluations to determine if their perceptions of the standard…
Descriptors: Mathematics Tests, Standard Setting (Scoring), French, Evaluation Research
Peer reviewed
Russell, Michael; Famularo, Lisa – Journal of Applied Testing Technology, 2008
Student assessment is an integral component of classroom instruction. Assessment is intended to help teachers identify what students are able to do and what content and skills students must develop further. State tests play an important role in guiding instruction. However, for some students, the tests may lead to inaccurate conclusions about…
Descriptors: Student Evaluation, Evaluation Research, Questionnaires, Mail Surveys
Peer reviewed
Thompson, Nathan A. – Journal of Applied Testing Technology, 2008
The widespread application of personal computers to educational and psychological testing has substantially increased the number of test administration methodologies available to testing programs. Many of these methods are referred to by their acronyms, such as CAT, CBT, CCT, and LOFT. The similarities between the acronyms and the methods…
Descriptors: Testing Programs, Psychological Testing, Classification, Educational Testing
Peer reviewed
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was coded for…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
Peer reviewed
Rabinowitz, Stanley; Ananda, Sri; Bell, Andrew – Journal of Applied Testing Technology, 2005
This paper focuses on this assessment issue: How do you increase the validity of assessments of ELL student performance on core academic content? We begin by exploring NCLB expectations for ELL assessments and an increasingly popular approach to meeting these requirements proposed by some states--translation of assessments into students' native…
Descriptors: Validity, Second Language Learning, English (Second Language), Federal Legislation