Publication Date
| Date Range | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 3 |
| Since 2006 (last 10 years) | 5 |
| Since 1996 (last 20 years) | 8 |
Descriptor
| Descriptor | Count |
| --- | --- |
| College Entrance Examinations | 7 |
| Test Items | 6 |
| Equated Scores | 4 |
| Scores | 4 |
| High Schools | 3 |
| Item Analysis | 3 |
| Item Response Theory | 3 |
| Black Students | 2 |
| Evaluation Methods | 2 |
| High School Students | 2 |
Source
| Source | Count |
| --- | --- |
| Journal of Educational… | 14 |
Author
| Author | Count |
| --- | --- |
| Dorans, Neil J. | 14 |
| Liu, Jinghua | 2 |
| Cahn, Miriam F. | 1 |
| Holland, Paul W. | 1 |
| Kingston, Neal M. | 1 |
| Kulick, Edward | 1 |
| Livingston, Samuel A. | 1 |
| Middleton, Kyndra | 1 |
| Moses, Timothy P. | 1 |
| Puhan, Gautam | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 14 |
| Reports - Research | 7 |
| Reports - Evaluative | 5 |
| Reports - Descriptive | 2 |
| Opinion Papers | 1 |
| Speeches/Meeting Papers | 1 |
Showing all 14 results
Dorans, Neil J. – Journal of Educational Measurement, 2013
van der Linden (this issue) uses words differently than Holland and Dorans. This difference in language usage is a source of some confusion in van der Linden's critique of what he calls equipercentile equating. I address these differences in language. van der Linden maintains that there are only two requirements for score equating. I maintain…
Descriptors: Equated Scores, Language Usage, Statistical Distributions
Dorans, Neil J.; Middleton, Kyndra – Journal of Educational Measurement, 2012
The interpretability of score comparisons depends on the design and execution of a sound data collection plan and the establishment of linkings between these scores. When comparisons are made between scores from two or more assessments that are built to different specifications and are administered to different populations under different…
Descriptors: Tests, Equated Scores, Test Interpretation, Validity
Liu, Jinghua; Dorans, Neil J. – Journal of Educational Measurement, 2012
At times, the same set of test questions is administered under different measurement conditions that might affect the psychometric properties of the test scores enough to warrant different score conversions for the different conditions. We propose a procedure for assessing the practical equivalence of conversions developed for the same set of test…
Descriptors: Measurement, Test Items, Psychometrics
Puhan, Gautam; Moses, Timothy P.; Yu, Lei; Dorans, Neil J. – Journal of Educational Measurement, 2009
This study examined the extent to which log-linear smoothing could improve the accuracy of differential item functioning (DIF) estimates in small samples of examinees. Examinee responses from a certification test were analyzed using White examinees in the reference group and African American examinees in the focal group. Using a simulation…
Descriptors: Test Items, Reference Groups, Testing Programs, Raw Scores
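The log-linear smoothing step this abstract mentions can be sketched as a polynomial Poisson regression on observed score frequencies; a degree-C fit preserves the first C raw moments of the distribution, which is what motivates its use with small samples. The counts and degree below are illustrative assumptions, not data from the study.

```python
import numpy as np

def loglinear_smooth(counts, degree=2, iters=50):
    """Smooth observed score frequencies with a polynomial log-linear
    (Poisson) model; at convergence the fit preserves the observed
    total and the first `degree` raw moments of the distribution."""
    n = np.asarray(counts, dtype=float)
    s = np.arange(len(n), dtype=float)
    s = (s - s.mean()) / s.std()               # standardize scores for stability
    X = np.vander(s, degree + 1, increasing=True)
    beta = np.zeros(degree + 1)
    beta[0] = np.log(n.mean())
    for _ in range(iters):                     # Newton-Raphson for the Poisson GLM
        m = np.exp(X @ beta)                   # fitted frequencies
        grad = X.T @ (n - m)
        hess = X.T @ (m[:, None] * X)
        beta += np.linalg.solve(hess, grad)
    return np.exp(X @ beta)

# Hypothetical small-sample frequencies over 11 score points.
obs = np.array([1, 3, 6, 9, 14, 16, 13, 10, 5, 2, 1])
smoothed = loglinear_smooth(obs, degree=2)
print(round(smoothed.sum(), 1))                # total frequency is preserved: 80.0
```

Because the design matrix includes an intercept column, the fitted frequencies always sum to the observed total, so downstream DIF or equating computations work from a distribution of the same size with less sampling noise.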
Liu, Jinghua; Cahn, Miriam F.; Dorans, Neil J. – Journal of Educational Measurement, 2006
The College Board's SAT® data are used to illustrate how the score equity assessment (SEA) can help inform the program about equatability. SEA is used to examine whether the content change(s) to the revised new SAT result in differential linking functions across gender groups. Results of population sensitivity analyses are reported on the…
Descriptors: Aptitude Tests, Comparative Analysis, Gender Differences, Scores
Dorans, Neil J. – Journal of Educational Measurement, 2004
Score equity assessment (SEA) is introduced, and placed within a fair assessment context that includes differential prediction or fair selection and differential item functioning. The notion of subpopulation invariance of linking functions is central to the assessment of score equity, just as it has been for differential item functioning and…
Descriptors: Prediction, Scores, Calculus, Advanced Placement
Dorans, Neil J. – Journal of Educational Measurement, 2002 (Peer reviewed)
Describes the process used to produce the conversions that take scores from the original Scholastic Assessment Test (SAT) scales to recentered scales in which the reference group scores are centered near the midpoint of the score-reporting range. Also describes the performance of the 1990 reference group and discusses issues related to…
Descriptors: College Entrance Examinations, High School Students, High Schools, Scores
Dorans, Neil J.; Holland, Paul W. – Journal of Educational Measurement, 2000 (Peer reviewed)
Studied the degree to which equating functions failed to demonstrate population invariance across subpopulations, using two root-mean-square difference measures of the degree to which functions used to link two tests computed on subpopulations differ from the linking function for the whole population. Illustrated the ideas using data from the…
Descriptors: College Entrance Examinations, Equated Scores, Test Construction
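The root-mean-square difference measure this abstract describes can be sketched as below: compare each subpopulation's linking function to the whole-population linking at common score points, weighting by subpopulation size. This is a simplified, unstandardized version (the published measures also divide by a score standard deviation), and all numbers here are hypothetical.

```python
import numpy as np

def rmsd_invariance(whole_link, sub_links, sub_weights):
    """Root-mean-square difference between subpopulation linking
    functions and the whole-population linking function, evaluated
    at common raw-score points and weighted by subpopulation size."""
    whole = np.asarray(whole_link, dtype=float)
    diffs = np.array([np.asarray(l, dtype=float) - whole for l in sub_links])
    w = np.asarray(sub_weights, dtype=float)[:, None]
    # Weighted mean squared gap across groups at each score point,
    # averaged over score points, then square-rooted.
    return float(np.sqrt(np.mean(np.sum(w * diffs ** 2, axis=0))))

# Hypothetical linkings over five raw-score points for two equal-size groups.
raw = np.arange(5, dtype=float)
whole = 2.0 * raw
groups = [2.0 * raw + 0.5, 2.0 * raw - 0.5]
print(rmsd_invariance(whole, groups, [0.5, 0.5]))   # constant 0.5 gap -> 0.5
```

A value near zero indicates the linking is essentially population invariant; larger values flag score regions where subgroup conversions diverge from the overall conversion.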
Dorans, Neil J.; Kingston, Neal M. – Journal of Educational Measurement, 1985 (Peer reviewed)
Since The Graduate Record Examination-Verbal measures two factors (reading comprehension and discrete verbal ability), the unidimensionality of item response theory is violated. The impact of this violation was examined by comparing three ability estimates: reading, discrete, and all verbal. Both dimensions were highly correlated; the impact was…
Descriptors: College Entrance Examinations, Factor Structure, Graduate Study, Higher Education
Dorans, Neil J.; Kulick, Edward – Journal of Educational Measurement, 1986 (Peer reviewed)
The standardization method for assessing unexpected differential item performance or differential item functioning is introduced. Findings of five studies are summarized, in which the statistical method of standardization is used to look for unexpected differences in item performance across different subpopulations of the Scholastic Aptitude Test.…
Descriptors: Groups, Item Analysis, Sociometric Techniques, Standardized Tests
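The standardization index this abstract introduces (often written STD P-DIF) can be sketched as a focal-group-weighted mean difference in proportion correct, conditioned on matched total score. The score levels, proportions, and counts below are hypothetical illustrations, not data from the studies summarized.

```python
import numpy as np

def std_p_dif(p_focal, p_ref, focal_counts):
    """Standardization index (STD P-DIF): focal-group-weighted mean of
    the focal-minus-reference difference in proportion correct on an
    item, conditioned on matched (total) score level."""
    w = np.asarray(focal_counts, dtype=float)
    gap = np.asarray(p_focal, dtype=float) - np.asarray(p_ref, dtype=float)
    return float(np.sum(w * gap) / np.sum(w))

# Hypothetical conditional proportions correct at four score levels.
p_f = [0.30, 0.50, 0.70, 0.90]   # focal group
p_r = [0.35, 0.55, 0.72, 0.90]   # reference group
n_f = [10, 40, 40, 10]           # focal-group counts per score level
print(round(std_p_dif(p_f, p_r, n_f), 3))   # -0.033
```

A negative value means the item is unexpectedly harder for the focal group after matching on total score; values near zero indicate no differential functioning on this index.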
Dorans, Neil J. – Journal of Educational Measurement, 1986 (Peer reviewed)
The analytical decomposition demonstrates how the effects of item characteristics, test properties, individual examinee responses, and rounding rules combine to produce the item deletion effect on the equating/scaling function and candidate scores. The empirical portion of the report illustrates the effects of item deletion on reported score…
Descriptors: Difficulty Level, Equated Scores, Item Analysis, Latent Trait Theory
Dorans, Neil J.; Livingston, Samuel A. – Journal of Educational Measurement, 1987 (Peer reviewed)
This study investigated the hypothesis that females who score high on the Mathematical portion of the Scholastic Aptitude Test do so because they have high verbal skills, whereas some males score high on the mathematics portion despite relatively low verbal skills. Evidence both for and against the hypothesis was observed. (Author/JAZ)
Descriptors: College Entrance Examinations, Females, High Schools, Hypothesis Testing
Schmitt, Alicia P.; Dorans, Neil J. – Journal of Educational Measurement, 1990 (Peer reviewed)
Recent findings on differential item functioning (DIF) for minority examinees (Asian Americans, Hispanics, and Blacks) taking the Scholastic Aptitude Test (SAT) are presented, and the standardization approach to assessing DIF is described. Item characteristics related to DIF that generalize across ethnic groups are discussed. (SLD)
Descriptors: Asian Americans, Black Students, College Applicants, College Entrance Examinations
Dorans, Neil J.; And Others – Journal of Educational Measurement, 1992 (Peer reviewed)
The standardization approach to comprehensive differential item functioning is described and contrasted with the log-linear approach to differential distractor functioning and the item-response-theory-based approach to differential alternative functioning. Data from an edition of the Scholastic Aptitude Test illustrate application of the approach…
Descriptors: Black Students, College Entrance Examinations, Comparative Testing, Distractors (Tests)
