Showing all 8 results
Peer reviewed
Oliveri, Maria Elena; Lawless, Rene; Robin, Frederic; Bridgeman, Brent – Applied Measurement in Education, 2018
We analyzed a pool of items from an admissions test for differential item functioning (DIF) for groups based on age, socioeconomic status, citizenship, or English language status using Mantel-Haenszel and item response theory. DIF items were systematically examined to identify their possible sources by item type, content, and wording. DIF was…
Descriptors: Test Bias, Comparative Analysis, Item Banks, Item Response Theory
Peer reviewed
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Peer reviewed
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
Peer reviewed
Bridgeman, Brent; Burton, Nancy; Cline, Frederick – Applied Measurement in Education, 2009
Descriptions of validity results based solely on correlation coefficients or percent of variance accounted for are not merely difficult to interpret; they are likely to be misinterpreted. Predictors that apparently account for a small percent of the variance may actually be highly important from a practical perspective. This study combined two…
Descriptors: Predictive Validity, College Entrance Examinations, Graduate Study, Grade Point Average
Peer reviewed
Hardison, Chaitra M.; Sackett, Paul R. – Applied Measurement in Education, 2008
Despite the growing use of writing assessments in standardized tests, little is known about coaching effects on writing assessments. Therefore, this study tested the effects of short-term coaching on standardized writing tests, and the transfer of those effects to other writing genres. College freshmen were randomly assigned to either training…
Descriptors: Control Groups, Group Membership, College Freshmen, Writing Tests
Peer reviewed
Enright, Mary K.; Morley, Mary; Sheehan, Kathleen M. – Applied Measurement in Education, 2002
Studied the impact of systematic item feature variation on item statistical characteristics, and the degree to which such information could serve as collateral information to supplement examinee performance data and reduce pretest sample size, by generating 2 families of 48 word-problem variants for the Graduate Record Examinations. Results with…
Descriptors: College Entrance Examinations, Sample Size, Statistical Analysis, Test Construction
Peer reviewed
Powers, Donald E.; Bennett, Randy Elliot – Applied Measurement in Education, 1999
Explored how allowing examinees to select test questions affected examinee performance and test characteristics for a measure of ability to generate hypotheses about a situation. Results with 2,429 examinees who elected the choice condition on the Graduate Record Examination suggest that items are differentially attractive to examinees. (SLD)
Descriptors: Ability, College Students, Higher Education, Responses
Peer reviewed
Scheuneman, Janice Dowd; Grima, Angela – Applied Measurement in Education, 1997
Differential item functioning for female-male and black-white groups and two samples of white males who differed in mathematical training were studied using data from 104 items from the Graduate Record Examination. Verbal properties of items were associated with differential performance of men and women but not of blacks and whites. (SLD)
Descriptors: Black Students, Females, Item Bias, Mathematics Achievement