50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing all 6 results
Peer reviewed
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S. – Journal of Applied Testing Technology, 2014
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
Descriptors: Standard Setting, Difficulty Level, Test Items, Rating Scales
Peer reviewed
Burke, Matthew; Devore, Richard; Stopek, Josh – Journal of Applied Testing Technology, 2013
This paper describes efforts to bring principled assessment design to a large-scale, high-stakes licensure examination by employing the frameworks of Assessment Engineering (AE), the Revised Bloom's Taxonomy (RBT), and Cognitive Task Analysis (CTA). The Uniform CPA Examination is practice-oriented and focuses on the skills of accounting. In…
Descriptors: Licensing Examinations (Professions), Accounting, Engineering, Test Construction
Peer reviewed
Luecht, Richard M. – Journal of Applied Testing Technology, 2013
Assessment engineering is a new way to design and implement scalable, sustainable and ideally lower-cost solutions to the complexities of designing and developing tests. It represents a merger of sorts between cognitive task modeling and engineering design principles--a merger that requires some new thinking about the nature of score scales, item…
Descriptors: Engineering, Test Construction, Test Items, Models
Peer reviewed
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
Peer reviewed
Steinberg, Jonathan; Cline, Frederick; Ling, Guangming; Cook, Linda; Tognatta, Namrata – Journal of Applied Testing Technology, 2009
This study examines the appropriateness of a large-scale state standards-based English-Language Arts (ELA) assessment for students who are deaf or hard of hearing by comparing the internal test structures for these students to students without disabilities. The Grade 4 and 8 ELA assessments were analyzed via a series of parcel-level exploratory…
Descriptors: Test Bias, Language Arts, State Standards, Partial Hearing
Peer reviewed
Camara, Wayne – Journal of Applied Testing Technology, 2009
The five papers in this special issue of the "Journal of Applied Testing Technology" address fundamental issues of validity when tests are modified or accommodations are provided to English Language Learners (ELL) or students with disabilities. Three papers employed differential item functioning (DIF) and factor analysis and found the underlying…
Descriptors: Second Language Learning, Factor Analysis, English (Second Language), Cognitive Ability