50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing all 8 results
Peer reviewed
Stocking, Martha L.; Lawrence, Ida; Feigenbaum, Miriam; Jirele, Thomas; Lewis, Charles; Van Essen, Thomas – Journal of Educational Measurement, 2002
Constructed four different kinds of test sections using three methods of test assembly that incorporate the goal of simultaneously moderating the impact of gender, African American status, and Hispanic American status, resulting in 10 test forms, each completed by at least 7,000 test takers. Discusses the effects of moderating impact in this…
Descriptors: Black Students, Higher Education, Hispanic American Students, Sex Differences
Peer reviewed
Willingham, Warren W.; Pollack, Judith M.; Lewis, Charles – Journal of Educational Measurement, 2002
Proposed a framework of possible differences between grades and test scores and tested the framework with data on 8,454 high school seniors from the National Education Longitudinal Study. Identified differences and correlations among achievement factors. Differences between grades and tests give these measures complementary strengths in…
Descriptors: Academic Achievement, Correlation, Elementary Secondary Education, Grades (Scholastic)
Peer reviewed
Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles – Journal of Educational Measurement, 1999
Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)
Descriptors: Bayesian Statistics, Item Bias, Statistical Distributions, Test Items
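The empirical Bayes idea described in the Zwick, Thayer, and Lewis abstract can be sketched in a few lines: compute the usual Mantel-Haenszel D-DIF statistic for each item from ability-stratified 2x2 tables, then shrink each observed statistic toward the prior mean in proportion to the prior variance relative to its sampling variance. This is an illustrative sketch under simplifying assumptions (known sampling variances, prior mean estimated by the observed mean), not the paper's exact implementation; the function names and table layout are invented for the example.

```python
import math

def mh_d_dif(tables):
    """Mantel-Haenszel D-DIF statistic from 2x2 tables stratified by
    ability level. Each table is a tuple:
    (ref_correct, ref_wrong, focal_correct, focal_wrong).
    Negative values indicate DIF favoring the reference group."""
    num = den = 0.0
    for a, b, c, d in tables:
        t = a + b + c + d
        num += a * d / t   # reference-correct x focal-wrong
        den += b * c / t   # reference-wrong x focal-correct
    # ETS delta-metric transform of the common odds ratio
    return -2.35 * math.log(num / den)

def eb_shrink(stats, variances, tau2):
    """Empirical Bayes shrinkage of observed DIF statistics toward their
    mean, assuming normal sampling error (variance s2 per item) and a
    normal prior with variance tau2."""
    mu = sum(stats) / len(stats)
    return [(tau2 * x + s2 * mu) / (tau2 + s2)
            for x, s2 in zip(stats, variances)]
```

With a single balanced table (equal pass rates in both groups) the common odds ratio is 1, so the D-DIF statistic is 0; shrinkage pulls noisy per-item estimates toward the pooled mean, stabilizing DIF flags for items with small focal-group samples.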
Peer reviewed
Stocking, Martha L.; Jirele, Thomas; Lewis, Charles; Swanson, Len – Journal of Educational Measurement, 1998
Constructed a pool of items from operational tests of mathematics to investigate the feasibility of using automated-test-assembly (ATA) methods to simultaneously moderate possibly irrelevant differences between the performance of women and men and of African American and White test takers. Discusses the usefulness of ATA. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests
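The kind of assembly problem the Stocking et al. abstract describes can be illustrated with a toy greedy heuristic: pick items one at a time so that the form's mean difficulty stays near a target while its mean group-impact is driven toward zero. This is not the paper's ATA method (operational ATA typically uses formal optimization); the item representation, weights, and function name below are assumptions for the sketch.

```python
def assemble_form(pool, n_items, target_diff, w_impact=1.0):
    """Greedy sketch of automated test assembly: choose n_items from
    pool, balancing mean difficulty against target_diff while pushing
    the mean group-impact toward zero.
    pool: list of (difficulty, impact) pairs, where impact is the score
    gap an item contributes between two examinee groups."""
    chosen = []
    remaining = list(pool)
    for _ in range(n_items):
        def penalty(item):
            trial = chosen + [item]
            mean_d = sum(d for d, _ in trial) / len(trial)
            mean_i = sum(i for _, i in trial) / len(trial)
            return abs(mean_d - target_diff) + w_impact * abs(mean_i)
        best = min(remaining, key=penalty)
        remaining.remove(best)
        chosen.append(best)
    return chosen
```

Because positive- and negative-impact items offset each other in the running mean, the heuristic tends to select a form whose aggregate impact is near zero, which is the "moderation" goal the abstract refers to.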
Peer reviewed
Bridgeman, Brent; Lewis, Charles – Journal of Educational Measurement, 1996
A reanalysis of the data considered by H. Wainer and L. Steinberg (1992) shows that a more appropriate composite indicator made up of Scholastic Aptitude Test mathematics score and high school grade point average demonstrates minuscule gender differences for both calculus and precalculus courses. (SLD)
Descriptors: College Entrance Examinations, College Freshmen, Females, Grade Point Average
Peer reviewed
Wainer, Howard; Lewis, Charles – Journal of Educational Measurement, 1990
Three different applications of the testlet concept are presented, and the psychometric models most suitable for each application are described. Difficulties that testlets can help overcome include (1) context effects; (2) item ordering; and (3) content balancing. Implications for test construction are discussed. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Elementary Secondary Education, Item Response Theory
Peer reviewed
Livingston, Samuel A.; Lewis, Charles – Journal of Educational Measurement, 1995
A method is presented for estimating the accuracy and consistency of classifications based on test scores. The reliability of the score is used to estimate effective test length in terms of discrete items. The true-score distribution is estimated by fitting a four-parameter beta model. (SLD)
Descriptors: Classification, Estimation (Mathematics), Scores, Statistical Distributions
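The two steps in the Livingston and Lewis abstract can be sketched directly: first convert the score's reliability into an effective number of discrete items, then integrate classification agreement over a beta true-score density with binomial measurement error. For brevity this sketch uses a two-parameter beta rather than the paper's four-parameter model, and the grid integration and function names are illustrative assumptions.

```python
import math

def effective_test_length(mean, sd, reliability):
    """Livingston-Lewis effective number of discrete items for a
    proportion-correct score with the given mean, SD, and reliability."""
    var = sd * sd
    return (mean * (1 - mean) - reliability * var) / (var * (1 - reliability))

def classification_accuracy(alpha, beta, n, cut):
    """P(observed and true classifications agree about a cut score),
    assuming a two-parameter beta true-score density (a simplification
    of the four-parameter model) and binomial error over n items."""
    n = round(n)
    k_cut = math.ceil(cut * n)
    const = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    steps = 2000
    agree = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps                       # midpoint grid over true scores
        w = const * p**(alpha - 1) * (1 - p)**(beta - 1) / steps
        pass_prob = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                        for k in range(k_cut, n + 1))
        agree += w * (pass_prob if p >= cut else 1 - pass_prob)
    return agree
```

For a score with mean 0.7, SD 0.1, and reliability 0.9, the effective test length works out to about 201 items; accuracy then falls between chance agreement and 1, dropping as the cut score moves into the dense part of the true-score distribution.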
Peer reviewed
Bridgeman, Brent; Lewis, Charles – Journal of Educational Measurement, 1994
Examining the correlations of the multiple-choice and essay portions of the College Board Advanced Placement (AP) examinations with the grades of first-year students from 32 colleges (largest sample = 6,243) shows the strongest correlations for the multiple-choice tests on 2 examinations, with multiple choice and essay performing nearly equally for the…
Descriptors: Biology, College Freshmen, Correlation, English