50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 21 results
Peer reviewed
Direct link
Suh, Youngsuk; Talley, Anna E. – Applied Measurement in Education, 2015
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Methods
Peer reviewed
Direct link
Steedle, Jeffrey T. – Applied Measurement in Education, 2014
Possible lack of motivation is a perpetual concern when tests have no stakes attached to performance. Specifically, the validity of test score interpretations may be compromised when examinees are unmotivated to exert their best efforts. Motivation filtering, a procedure that filters out apparently unmotivated examinees, was applied to the…
Descriptors: College Outcomes Assessment, Student Motivation, Sampling, Validity
Peer reviewed
Direct link
Sawyer, Richard – Applied Measurement in Education, 2013
Correlational evidence suggests that high school GPA is better than admission test scores in predicting first-year college GPA, although test scores have incremental predictive validity. The usefulness of a selection variable in making admission decisions depends in part on its predictive validity, but also on institutions' selectivity and…
Descriptors: High Schools, Grade Point Average, College Entrance Examinations, College Admission
Peer reviewed
Direct link
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
Peer reviewed
Direct link
Hayes, Heather; Embretson, Susan E. – Applied Measurement in Education, 2013
Online and on-demand tests are increasingly used in assessment. Although the main focus has been cheating and test security (e.g., Selwyn, 2008) the cross-setting equivalence of scores as a function of contrasting test conditions is also an issue that warrants attention. In this study, the impact of environmental and cognitive distractions, as…
Descriptors: College Students, Computer Assisted Testing, Problem Solving, Physical Environment
Peer reviewed
Direct link
Setzer, J. Carl; Wise, Steven L.; van den Heuvel, Jill R.; Ling, Guangming – Applied Measurement in Education, 2013
Assessment results collected under low-stakes testing situations are subject to effects of low examinee effort. The use of computer-based testing allows researchers to develop new ways of measuring examinee effort, particularly using response times. At the item level, responses can be classified as exhibiting either rapid-guessing behavior or…
Descriptors: Testing, Guessing (Tests), Reaction Time, Test Items
Peer reviewed
Direct link
Shen, Winny; Sackett, Paul R.; Kuncel, Nathan R.; Beatty, Adam S.; Rigdon, Jana L.; Kiger, Thomas B. – Applied Measurement in Education, 2012
Previous research has demonstrated that cognitive test validities are generalizable and predictive of academic performance across situations. However, even after accounting for statistical artifacts (e.g., sampling error, range restriction, criterion reliability), substantial variability often remains around estimates of cognitive test-performance…
Descriptors: College Entrance Examinations, Standardized Tests, Test Validity, Institutional Characteristics
Peer reviewed
Direct link
Sinha, Ruchi; Oswald, Frederick; Imus, Anna; Schmitt, Neal – Applied Measurement in Education, 2011
The current study examines how using a multidimensional battery of predictors (high-school grade point average (GPA), SAT/ACT, and biodata), and weighting the predictors based on the different values institutions place on various student performance dimensions (college GPA, organizational citizenship behaviors (OCBs), and behaviorally anchored…
Descriptors: Grade Point Average, Interrater Reliability, Rating Scales, College Admission
Peer reviewed
Direct link
Swerdzewski, Peter J.; Harmes, J. Christine; Finney, Sara J. – Applied Measurement in Education, 2011
Many universities rely on data gathered from tests that are low stakes for examinees but high stakes for the various programs being assessed. Given the lack of consequences associated with many collegiate assessments, the construct-irrelevant variance introduced by unmotivated students is potentially a serious threat to the validity of the…
Descriptors: Computer Assisted Testing, Student Motivation, Inferences, Universities
Peer reviewed
Direct link
Imus, Anna; Schmitt, Neal; Kim, Brian; Oswald, Frederick L.; Merritt, Stephanie; Wrestring, Alyssa Friede – Applied Measurement in Education, 2011
Investigations of differential item functioning (DIF) have been conducted mostly on ability tests and have found little evidence of easily interpretable differences across various demographic subgroups. In this study, we examined the degree to which DIF in biographical data items referencing academically relevant background, experiences, and…
Descriptors: Test Bias, Gender Differences, Racial Differences, Biographical Inventories
Peer reviewed
Direct link
Liu, Ou Lydia – Applied Measurement in Education, 2011
The TOEFL[R] iBT has increased the length of each reading passage to better approximate academic reading at North American universities, resulting in a reduction in the number of passages on the reading section of the test. One of the concerns brought about by this change is whether the decrease in topic variety increases the likelihood that an…
Descriptors: Language Tests, Reading Tests, English (Second Language), Test Bias
Peer reviewed
Direct link
Kim, HeeKyoung; Kolen, Michael J. – Applied Measurement in Education, 2010
Test equating might be affected by including in the equating analyses examinees who have taken the test previously. This study evaluated the effect of including such repeaters on Medical College Admission Test (MCAT) equating using a population invariance approach. Three-parameter logistic (3-PL) item response theory (IRT) true score and…
Descriptors: Repetition, Equated Scores, College Entrance Examinations, Medical Schools
Peer reviewed
Direct link
Livingston, Samuel A.; Antal, Judit – Applied Measurement in Education, 2010
A simultaneous equating of four new test forms to each other and to one previous form was accomplished through a complex design incorporating seven separate equating links. Each new form was linked to the reference form by four different paths, and each path produced a different score conversion. The procedure used to resolve these inconsistencies…
Descriptors: Measurement Techniques, Measurement, Educational Assessment, Educational Testing
Peer reviewed
Direct link
Allen, Jeff; Robbins, Steven B.; Sawyer, Richard – Applied Measurement in Education, 2010
Research on the validity of psychosocial factors (PSFs) and other noncognitive predictors of college outcomes has largely ignored the practical benefits implied by the validity. We summarize evidence of the validity of PSF measures as predictors of college outcomes and then explain how this validity directly translates into improved identification…
Descriptors: Institutional Research, Academic Persistence, Validity, At Risk Students
Peer reviewed
Direct link
Thompson, James J.; Yang, Tong; Chauvin, Sheila W. – Applied Measurement in Education, 2009
In some professions, speed and accuracy are as important as acquired requisite knowledge and skills. The availability of computer-based testing now facilitates examination of these two important aspects of student performance. We found that student response times in a conventional non-speeded multiple-choice test, at both the global and individual…
Descriptors: Reaction Time, Test Items, Student Reaction, Multiple Choice Tests