50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing 46 to 60 of 520 results
Peer reviewed
Kim, Sooyeon; Walker, Michael E. – Applied Measurement in Education, 2012
This study investigated the impact of repeat takers of a licensure test on the equating functions in the context of a nonequivalent groups with anchor test (NEAT) design. Examinees who had taken a new, to-be-equated form of the test were divided into three subgroups according to their previous testing experience: (a) repeaters who previously took…
Descriptors: Equated Scores, Licensing Examinations (Professions), Repetition, Regression (Statistics)
Peer reviewed
Kahraman, Nilufer; De Champlain, Andre; Raymond, Mark – Applied Measurement in Education, 2012
Item-level information, such as difficulty and discrimination, is invaluable to test assembly, equating, and scoring practices. Estimating these parameters within the context of large-scale performance assessments is often hindered by the use of unbalanced designs for assigning examinees to tasks and raters, because such designs result in very…
Descriptors: Performance Based Assessment, Medicine, Factor Analysis, Test Items
Peer reviewed
Banks, Kathleen – Applied Measurement in Education, 2012
The purpose of this article is to illustrate a seven-step process for determining whether inferential reading items were more susceptible to cultural bias than literal reading items. The seven-step process was demonstrated using multiple-choice data from the reading portion of a reading/language arts test for fifth and seventh grade Hispanic,…
Descriptors: Reading Tests, Test Items, Standardized Tests, Test Bias
Peer reviewed
Shen, Winny; Sackett, Paul R.; Kuncel, Nathan R.; Beatty, Adam S.; Rigdon, Jana L.; Kiger, Thomas B. – Applied Measurement in Education, 2012
Previous research has demonstrated that cognitive test validities are generalizable and predictive of academic performance across situations. However, even after accounting for statistical artifacts (e.g., sampling error, range restriction, criterion reliability), substantial variability often remains around estimates of cognitive test-performance…
Descriptors: College Entrance Examinations, Standardized Tests, Test Validity, Institutional Characteristics
Peer reviewed
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2012
This was a study of differential item functioning (DIF) for grades 4, 7, and 10 reading and mathematics items from state criterion-referenced tests. The tests were composed of multiple-choice and constructed-response items. Gender DIF was investigated using POLYSIBTEST and a Rasch procedure. The Rasch procedure flagged more items for DIF than did…
Descriptors: Test Bias, Gender Differences, Reading Tests, Mathematics Tests
Peer reviewed
Cho, Hyun-Jeong; Lee, Jaehoon; Kingston, Neal – Applied Measurement in Education, 2012
This study examined the validity of test accommodations for third- through eighth-grade students using differential item functioning (DIF) and mixture IRT models. Two data sets were used for these analyses. With the first data set (N = 51,591) we examined whether item type (i.e., story, explanation, straightforward) or item features were associated with item…
Descriptors: Testing Accommodations, Test Bias, Item Response Theory, Validity
Peer reviewed
Wolf, Mikyung Kim; Kim, Jinok; Kao, Jenny – Applied Measurement in Education, 2012
Glossary and reading aloud test items are commonly allowed in many states' accommodation policies for English language learner (ELL) students for large-scale mathematics assessments. However, little research is available regarding the effects of these accommodations on ELL students' performance. Further, no research exists that examines how…
Descriptors: Testing Accommodations, Glossaries, Reading Aloud to Others, Validity
Peer reviewed
Ho, Tsung-Han; Dodd, Barbara G. – Applied Measurement in Education, 2012
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Li, Hongli; Suen, Hoi K. – Applied Measurement in Education, 2012
A meta-analysis using Hierarchical Linear Modeling (HLM) was conducted to examine the effects of test accommodations on the test performance of English language learners (ELLs). The results indicated that test accommodations improve ELLs' test performance by about 0.157 standard deviations--a relatively small but statistically significant…
Descriptors: Testing Accommodations, English (Second Language), Second Language Learning, Limited English Speaking
Peer reviewed
Kim, Sooyeon; von Davier, Alina A.; Haberman, Shelby – Applied Measurement in Education, 2011
The synthetic function is a weighted average of the identity (the linking function for forms that are known to be completely parallel) and a traditional equating method. The purpose of the present study was to investigate the benefits of the synthetic function on small-sample equating using various real data sets gathered from different…
Descriptors: Testing Programs, Equated Scores, Investigations, Data Analysis
Peer reviewed
Sinha, Ruchi; Oswald, Frederick; Imus, Anna; Schmitt, Neal – Applied Measurement in Education, 2011
The current study examines how using a multidimensional battery of predictors (high-school grade point average (GPA), SAT/ACT, and biodata), and weighting the predictors based on the different values institutions place on various student performance dimensions (college GPA, organizational citizenship behaviors (OCBs), and behaviorally anchored…
Descriptors: Grade Point Average, Interrater Reliability, Rating Scales, College Admission
Peer reviewed
Lee, Hee-Sun; Liu, Ou Lydia; Linn, Marcia C. – Applied Measurement in Education, 2011
This study explores measurement of a construct called knowledge integration in science using multiple-choice and explanation items. We use construct and instructional validity evidence to examine the role multiple-choice and explanation items play in measuring students' knowledge integration ability. For construct validity, we analyze item…
Descriptors: Knowledge Level, Construct Validity, Validity, Scaffolding (Teaching Technique)
Peer reviewed
Swerdzewski, Peter J.; Harmes, J. Christine; Finney, Sara J. – Applied Measurement in Education, 2011
Many universities rely on data gathered from tests that are low stakes for examinees but high stakes for the various programs being assessed. Given the lack of consequences associated with many collegiate assessments, the construct-irrelevant variance introduced by unmotivated students is potentially a serious threat to the validity of the…
Descriptors: Computer Assisted Testing, Student Motivation, Inferences, Universities
Peer reviewed
Imus, Anna; Schmitt, Neal; Kim, Brian; Oswald, Frederick L.; Merritt, Stephanie; Westring, Alyssa Friede – Applied Measurement in Education, 2011
Investigations of differential item functioning (DIF) have been conducted mostly on ability tests and have found little evidence of easily interpretable differences across various demographic subgroups. In this study, we examined the degree to which DIF in biographical data items referencing academically relevant background, experiences, and…
Descriptors: Test Bias, Gender Differences, Racial Differences, Biographical Inventories
Peer reviewed
Engelhard, George, Jr.; Fincher, Melissa; Domaleski, Christopher S. – Applied Measurement in Education, 2011
This study examines the effects of two test administration accommodations on the mathematics performance of students within the context of a large-scale statewide assessment. The two test administration accommodations were resource guides and calculators. A stratified random sample of schools was selected to represent the demographic…
Descriptors: Testing Accommodations, Disabilities, High Stakes Tests, Program Effectiveness