50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 14 results
Peer reviewed
Direct link
Dorans, Neil J. – Journal of Educational Measurement, 2013
van der Linden (this issue) uses words differently than Holland and Dorans. This difference in language usage is a source of some confusion in van der Linden's critique of what he calls equipercentile equating. I address these differences in language. van der Linden maintains that there are only two requirements for score equating. I maintain…
Descriptors: Equated Scores, Language Usage, Statistical Distributions
Peer reviewed
Direct link
Dorans, Neil J.; Middleton, Kyndra – Journal of Educational Measurement, 2012
The interpretability of score comparisons depends on the design and execution of a sound data collection plan and the establishment of linkings between these scores. When comparisons are made between scores from two or more assessments that are built to different specifications and are administered to different populations under different…
Descriptors: Tests, Equated Scores, Test Interpretation, Validity
Peer reviewed
Direct link
Liu, Jinghua; Dorans, Neil J. – Journal of Educational Measurement, 2012
At times, the same set of test questions is administered under different measurement conditions that might affect the psychometric properties of the test scores enough to warrant different score conversions for the different conditions. We propose a procedure for assessing the practical equivalence of conversions developed for the same set of test…
Descriptors: Measurement, Test Items, Psychometrics
Peer reviewed
Direct link
Puhan, Gautam; Moses, Timothy P.; Yu, Lei; Dorans, Neil J. – Journal of Educational Measurement, 2009
This study examined the extent to which log-linear smoothing could improve the accuracy of differential item functioning (DIF) estimates in small samples of examinees. Examinee responses from a certification test were analyzed using White examinees in the reference group and African American examinees in the focal group. Using a simulation…
Descriptors: Test Items, Reference Groups, Testing Programs, Raw Scores
Peer reviewed
Direct link
Liu, Jinghua; Cahn, Miriam F.; Dorans, Neil J. – Journal of Educational Measurement, 2006
The College Board's SAT[R] data are used to illustrate how the score equity assessment (SEA) can help inform the program about equatability. SEA is used to examine whether the content change(s) to the revised new SAT result in differential linking functions across gender groups. Results of population sensitivity analyses are reported on the…
Descriptors: Aptitude Tests, Comparative Analysis, Gender Differences, Scores
Peer reviewed
Direct link
Dorans, Neil J. – Journal of Educational Measurement, 2004
Score equity assessment (SEA) is introduced, and placed within a fair assessment context that includes differential prediction or fair selection and differential item functioning. The notion of subpopulation invariance of linking functions is central to the assessment of score equity, just as it has been for differential item functioning and…
Descriptors: Prediction, Scores, Calculus, Advanced Placement
Peer reviewed
Dorans, Neil J. – Journal of Educational Measurement, 2002
Describes the process used to produce the conversions that take scores from the original Scholastic Assessment Test (SAT) scales to recentered scales in which the reference group scores are centered near the midpoint of the score-reporting range. Also describes the performance of the 1990 reference group and discusses issues related to…
Descriptors: College Entrance Examinations, High School Students, High Schools, Scores
Peer reviewed
Dorans, Neil J.; Holland, Paul W. – Journal of Educational Measurement, 2000
Studied the degree to which equating functions failed to demonstrate population invariance across subpopulations, using two root-mean-square difference measures of the degree to which functions used to link two tests computed on subpopulations differ from the linking function for the whole population. Illustrated the ideas using data from the…
Descriptors: College Entrance Examinations, Equated Scores, Test Construction
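The root-mean-square difference (RMSD) measure this abstract describes can be sketched in a few lines. The function name, toy data, and exact weighting choice below are illustrative assumptions, not taken from the paper itself:

```python
import numpy as np

def rmsd_invariance(link_sub, link_pop, freqs):
    """Illustrative RMSD between a subpopulation's linking function and the
    whole-population linking function, evaluated at each score point and
    weighted by the examinee frequency at that point (assumed weighting)."""
    link_sub = np.asarray(link_sub, dtype=float)
    link_pop = np.asarray(link_pop, dtype=float)
    w = np.asarray(freqs, dtype=float)
    w = w / w.sum()  # normalize frequencies into weights
    return np.sqrt(np.sum(w * (link_sub - link_pop) ** 2))

# Toy example: linked scores at five raw-score points
pop = [200, 300, 400, 500, 600]   # whole-population linking
sub = [205, 298, 395, 510, 605]   # subpopulation linking
freq = [10, 30, 40, 15, 5]        # examinees at each score point

print(round(rmsd_invariance(sub, pop, freq), 2))  # → 5.47
```

A value near zero would indicate the linking is invariant for that subpopulation; larger values flag a departure from population invariance.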
Peer reviewed
Dorans, Neil J.; Kingston, Neal M. – Journal of Educational Measurement, 1985
Since The Graduate Record Examination-Verbal measures two factors (reading comprehension and discrete verbal ability), the unidimensionality of item response theory is violated. The impact of this violation was examined by comparing three ability estimates: reading, discrete, and all verbal. Both dimensions were highly correlated; the impact was…
Descriptors: College Entrance Examinations, Factor Structure, Graduate Study, Higher Education
Peer reviewed
Dorans, Neil J.; Kulick, Edward – Journal of Educational Measurement, 1986
The standardization method for assessing unexpected differential item performance or differential item functioning is introduced. Findings of five studies are summarized, in which the statistical method of standardization is used to look for unexpected differences in item performance across different subpopulations of the Scholastic Aptitude Test.…
Descriptors: Groups, Item Analysis, Sociometric Techniques, Standardized Tests
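The standardization statistic this abstract introduces (often written STD P-DIF) can be sketched as a focal-group-weighted difference in proportion correct, conditional on the matching score. The names and toy numbers below are illustrative assumptions, not values from the studies:

```python
import numpy as np

def std_p_dif(p_focal, p_ref, n_focal):
    """Illustrative standardized P-DIF: the difference between focal- and
    reference-group proportions correct at each matching-score level,
    weighted by the focal-group frequency at that level."""
    p_f = np.asarray(p_focal, dtype=float)
    p_r = np.asarray(p_ref, dtype=float)
    w = np.asarray(n_focal, dtype=float)
    w = w / w.sum()  # focal-group frequencies as standardization weights
    return np.sum(w * (p_f - p_r))

# Toy example: proportion correct at four matching-score levels
p_f = [0.20, 0.45, 0.70, 0.90]   # focal group
p_r = [0.25, 0.50, 0.72, 0.91]   # reference group
n_f = [50, 100, 100, 50]         # focal-group examinees per level

print(round(std_p_dif(p_f, p_r, n_f), 3))  # → -0.033
```

A negative value means the item is, on average, harder for focal-group examinees than for matched reference-group examinees; values near zero suggest no unexpected difference in item performance.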
Peer reviewed
Dorans, Neil J. – Journal of Educational Measurement, 1986
The analytical decomposition demonstrates how the effects of item characteristics, test properties, individual examinee responses, and rounding rules combine to produce the item deletion effect on the equating/scaling function and candidate scores. The empirical portion of the report illustrates the effects of item deletion on reported score…
Descriptors: Difficulty Level, Equated Scores, Item Analysis, Latent Trait Theory
Peer reviewed
Dorans, Neil J.; Livingston, Samuel A. – Journal of Educational Measurement, 1987
This study investigated the hypothesis that females who score high on the Mathematical portion of Scholastic Aptitude Test do so because they have high verbal skills, whereas some males score high on the mathematics despite their relatively low verbal skills. Evidence for and against the hypothesis was observed. (Author/JAZ)
Descriptors: College Entrance Examinations, Females, High Schools, Hypothesis Testing
Peer reviewed
Schmitt, Alicia P.; Dorans, Neil J. – Journal of Educational Measurement, 1990
Recent findings on differential item functioning (DIF) for minority examinees (Asian Americans, Hispanics, and Blacks) taking the Scholastic Aptitude Test (SAT) are presented, and the standardization approach to assessing DIF is described. Item characteristics related to DIF that generalize across ethnic groups are discussed. (SLD)
Descriptors: Asian Americans, Black Students, College Applicants, College Entrance Examinations
Peer reviewed
Dorans, Neil J.; And Others – Journal of Educational Measurement, 1992
The standardization approach to comprehensive differential item functioning is described and contrasted with the log-linear approach to differential distractor functioning and the item-response-theory-based approach to differential alternative functioning. Data from an edition of the Scholastic Aptitude Test illustrate application of the approach…
Descriptors: Black Students, College Entrance Examinations, Comparative Testing, Distractors (Tests)