50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing all 6 results
Peer reviewed
Direct link
Gattamorta, Karina A.; Penfield, Randall D. – Applied Measurement in Education, 2012
The study of measurement invariance in polytomous items that targets individual score levels is known as differential step functioning (DSF). The analysis of DSF requires the creation of a set of dichotomizations of the item response variable. There are two primary approaches for creating the set of dichotomizations to conduct a DSF analysis: the…
Descriptors: Measurement, Item Response Theory, Test Bias, Test Items
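The abstract above is truncated before naming the two dichotomization approaches, but in the DSF literature these are commonly the cumulative and adjacent-categories schemes. A minimal sketch of both (function names and the `None`-for-dropped convention are mine, not from the article):

```python
def cumulative_dichotomizations(responses, max_score):
    """For each step s = 1..max_score, code 1 if the response is >= s, else 0.
    Every examinee contributes to every dichotomized variable."""
    return {s: [1 if r >= s else 0 for r in responses]
            for s in range(1, max_score + 1)}

def adjacent_dichotomizations(responses, max_score):
    """For each step s, keep only responses in {s-1, s} and code 1 for s.
    Examinees outside the adjacent pair are dropped (coded None here)."""
    return {s: [None if r not in (s - 1, s) else int(r == s)
                for r in responses]
            for s in range(1, max_score + 1)}

# Six examinees on a 0-3 polytomous item
responses = [0, 1, 2, 3, 1, 2]
cum = cumulative_dichotomizations(responses, 3)
adj = adjacent_dichotomizations(responses, 3)
```

Each dichotomized variable can then be analyzed with a standard dichotomous DIF procedure at that score level.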
Peer reviewed
Direct link
Penfield, Randall D.; Alvarez, Karina; Lee, Okhee – Applied Measurement in Education, 2009
The assessment of differential item functioning (DIF) in polytomous items addresses between-group differences in measurement properties at the item level, but typically does not inform which score levels may be involved in the DIF effect. The framework of differential step functioning (DSF) addresses this issue by examining between-group…
Descriptors: Test Bias, Classification, Test Items, Criteria
Peer reviewed
Direct link
Penfield, Randall D. – Applied Measurement in Education, 2007
A widely used approach for categorizing the level of differential item functioning (DIF) in dichotomous items is the scheme proposed by Educational Testing Service (ETS), based on a transformation of the Mantel-Haenszel common odds ratio. In this article, two classification schemes for DIF in polytomous items (referred to as the P1 and P2 schemes)…
Descriptors: Simulation, Educational Testing, Test Bias, Evaluation Methods
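The ETS scheme referenced above maps the Mantel-Haenszel common odds ratio onto the delta metric and bins the result into A/B/C categories. A simplified sketch of that transformation (the full ETS rules also attach statistical-significance conditions, omitted here):

```python
import math

def mh_delta(common_odds_ratio):
    # ETS delta metric: MH D-DIF = -2.35 * ln(alpha_MH)
    return -2.35 * math.log(common_odds_ratio)

def ets_category(delta):
    # Simplified A/B/C cutoffs on |MH D-DIF|; the operational rules
    # additionally require significance tests before flagging B or C
    magnitude = abs(delta)
    if magnitude < 1.0:
        return "A"   # negligible DIF
    if magnitude >= 1.5:
        return "C"   # large DIF
    return "B"       # moderate DIF

# An odds ratio of 1.0 (no between-group difference) maps to delta 0, category A
category = ets_category(mh_delta(1.0))
```

The P1 and P2 schemes in the article extend this kind of categorization to polytomous items.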
Peer reviewed
Direct link
Penfield, Randall D. – Applied Measurement in Education, 2006
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Descriptors: Bayesian Statistics, Adaptive Testing, Computer Assisted Testing, Test Items
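The MEI and MPI criteria studied above weight item information by predicted response probabilities; a simpler baseline they extend is maximum Fisher information at the current ability estimate. A sketch of that baseline under the partial credit model (step-difficulty parameterization and function names are mine):

```python
import math

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.
    deltas: step difficulties delta_1..delta_m; categories 0..m."""
    # Cumulative sums of (theta - delta_j), with the score-0 term fixed at 0
    psi = [0.0]
    for d in deltas:
        psi.append(psi[-1] + (theta - d))
    exps = [math.exp(p) for p in psi]
    total = sum(exps)
    return [e / total for e in exps]

def pcm_information(theta, deltas):
    # For divide-by-total models like the PCM, item information at theta
    # equals the conditional variance of the item score, Var(X | theta)
    p = pcm_probs(theta, deltas)
    mean = sum(k * pk for k, pk in enumerate(p))
    return sum((k - mean) ** 2 * pk for k, pk in enumerate(p))

def select_item(theta, item_bank):
    # Administer the item with maximum information at the current estimate
    return max(item_bank, key=lambda deltas: pcm_information(theta, deltas))

# Three 0-2 items; the one with steps near theta = 0.2 is most informative
bank = [[-2.0, -1.5], [0.0, 0.5], [2.0, 2.5]]
chosen = select_item(0.2, bank)
```

MEI replaces the point evaluation with an expectation of post-response information over the predicted category probabilities, and MPI weights information by the posterior on theta.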
Peer reviewed
Direct link
Penfield, Randall D.; Miller, Jeffrey M. – Applied Measurement in Education, 2004
As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…
Descriptors: Student Evaluation, Evaluation Methods, Content Validity, Scoring
Peer reviewed
Penfield, Randall D. – Applied Measurement in Education, 2001
Compared the performance of three methods of assessing differential item functioning (DIF) across demographic groups, using: (1) the Mantel-Haenszel chi-square statistic with no adjustment to the alpha level; (2) the Mantel-Haenszel statistic with a Bonferroni adjusted alpha level; and (3) the generalized Mantel-Haenszel statistic. Simulation…
Descriptors: Chi Square, Demography, Item Bias, Power (Statistics)
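The first two methods compared above rest on the Mantel-Haenszel chi-square computed from per-score-level 2x2 tables, with or without a Bonferroni correction to the alpha level. A minimal sketch of both pieces (the table layout convention is mine):

```python
def mh_chi_square(tables):
    """Mantel-Haenszel chi-square (1 df, continuity-corrected) for one item.
    tables: per-stratum 2x2 counts (a, b, c, d) =
    (reference correct, reference incorrect, focal correct, focal incorrect)."""
    sum_a = sum_e = sum_v = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        sum_a += a
        # Expected count and hypergeometric variance of cell a, given margins
        sum_e += (a + b) * (a + c) / n
        sum_v += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v

def bonferroni_alpha(alpha, num_items):
    # Adjusted per-item alpha when screening num_items items for DIF
    return alpha / num_items

# Two ability strata with no group difference vs. a strong group difference
no_dif = mh_chi_square([(25, 25, 25, 25), (25, 25, 25, 25)])
strong_dif = mh_chi_square([(40, 10, 10, 40), (40, 10, 10, 40)])
```

Comparing each item's statistic to the chi-square critical value at either the nominal or the Bonferroni-adjusted alpha reproduces the first two conditions of the simulation; the generalized Mantel-Haenszel statistic extends the 2x2 tables to multiple focal groups.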