50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th anniversary! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 12 results
Peer reviewed
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Peer reviewed
Oliveri, María Elena; Ercikan, Kadriye; Zumbo, Bruno D.; Lawless, René – International Journal of Testing, 2014
In this study, we contrast results from two differential item functioning (DIF) approaches (manifest and latent class) by the number of items and sources of items identified as DIF using data from an international reading assessment. The latter approach yielded three latent classes, presenting evidence of heterogeneity in examinee response…
Descriptors: Test Bias, Comparative Analysis, Reading Tests, Effect Size
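The "manifest" DIF approach contrasted above is commonly operationalized with the Mantel-Haenszel procedure: examinees are matched on total score, and a common odds ratio is pooled across score strata. The abstract does not specify the authors' exact method, so the sketch below is only an illustration of the general manifest technique, with hypothetical data labels:

```python
from collections import defaultdict

def mantel_haenszel_dif(responses):
    """Mantel-Haenszel common odds ratio for one studied item.

    `responses` is a list of (group, stratum, correct) tuples:
    group is 'ref' or 'focal', stratum is the matching total score,
    and correct is 1/0 on the studied item. An odds ratio near 1
    suggests no DIF; values far from 1 flag the item.
    """
    # Build a 2x2 table per score stratum: rows = group, cols = right/wrong.
    tables = defaultdict(lambda: [[0, 0], [0, 0]])
    for group, stratum, correct in responses:
        row = 0 if group == "ref" else 1
        col = 0 if correct else 1
        tables[stratum][row][col] += 1

    num = den = 0.0
    for t in tables.values():
        n = sum(sum(row) for row in t)
        if n == 0:
            continue
        a, b = t[0]  # reference group: correct, incorrect
        c, d = t[1]  # focal group: correct, incorrect
        num += a * d / n
        den += b * c / n
    return num / den if den else float("nan")
```

Latent-class DIF, by contrast, replaces the observed grouping variable with model-estimated classes, which is how heterogeneity like the three classes reported here can surface.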
Peer reviewed
Kan, Adnan; Bulut, Okan – International Journal of Testing, 2014
This study investigated whether the linguistic complexity of items leads to gender differential item functioning (DIF) on mathematics assessments. Two forms of a mathematics test were developed. The first form consisted of algebra items based on mathematical expressions, terms, and equations. In the second form, the same items were written as word…
Descriptors: Gender Differences, Test Bias, Difficulty Level, Test Items
Peer reviewed
Quaiser-Pohl, Claudia; Neuburger, Sarah; Heil, Martin; Jansen, Petra; Schmelter, Andrea – International Journal of Testing, 2014
This article presents a reanalysis of the data of 862 second and fourth graders collected in two previous studies, focusing on the influence of method (psychometric vs. chronometric) and stimulus type on the gender difference in mental-rotation accuracy. The children had to solve mental-rotation tasks with animal pictures, letters, or cube…
Descriptors: Foreign Countries, Gender Differences, Accuracy, Age Differences
Peer reviewed
DeMars, Christine E. – International Journal of Testing, 2013
This tutorial addresses possible sources of confusion in interpreting trait scores from the bifactor model. The bifactor model may be used when subscores are desired, either for formative feedback on an achievement test or for theoretically different constructs on a psychological test. The bifactor model is often chosen because it requires fewer…
Descriptors: Test Interpretation, Scores, Models, Correlation
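The bifactor model referenced in this tutorial loads every item on one general factor plus exactly one specific (group) factor, with all factors orthogonal. As a minimal sketch of that structure (not the tutorial's own code; loadings and unit item variances are illustrative assumptions), the model-implied item covariances can be written directly:

```python
def bifactor_cov(lg, ls, group):
    """Model-implied covariance matrix under an orthogonal bifactor model.

    lg[i]    : loading of item i on the general factor
    ls[i]    : loading of item i on its specific (group) factor
    group[i] : index of the specific factor item i belongs to
    Assumes unit-variance orthogonal factors and standardized items.
    """
    n = len(lg)
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            c = lg[i] * lg[j]                      # shared general factor
            if group[i] == group[j]:
                c += ls[i] * ls[j]                 # shared specific factor
            if i == j:
                c += 1 - lg[i] ** 2 - ls[i] ** 2   # unique variance
            cov[i][j] = c
    return cov
```

The sketch makes the interpretive point concrete: items in different groups correlate only through the general factor, which is why specific-factor trait scores must not be read as stand-alone subscale scores.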
Peer reviewed
Sandilands, Debra; Oliveri, Maria Elena; Zumbo, Bruno D.; Ercikan, Kadriye – International Journal of Testing, 2013
International large-scale assessments of achievement often have a large degree of differential item functioning (DIF) between countries, which can threaten score equivalence and reduce the validity of inferences based on comparisons of group performances. It is important to understand potential sources of DIF to improve the validity of future…
Descriptors: Validity, Measures (Individuals), International Studies, Foreign Countries
Peer reviewed
Ferdous, Abdullah A.; Buckendahl, Chad W. – International Journal of Testing, 2013
Considerable research about standard setting has revolved around a U.S.-centric policy context. That is, over the past decade, conclusions about thought processes and the interaction of education policy and panelists' judgments have been based on assumptions of comparable policy settings. However, whether these assumptions generalize to other…
Descriptors: Standard Setting (Scoring), Cognitive Processes, Mathematics Tests, Language Tests
Peer reviewed
Lee, Young-Sun; Park, Yoon Soo; Taylan, Didem – International Journal of Testing, 2011
Studies of international mathematics achievement such as the Trends in Mathematics and Science Study (TIMSS) have employed classical test theory and item response theory to rank individuals within a latent ability continuum. Although these approaches have provided insights into comparisons between countries, they have yet to examine how specific…
Descriptors: Mathematics Achievement, Achievement Tests, Models, Cognitive Measurement
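The item response theory scaling mentioned here typically places examinees on a latent ability continuum via a model such as the two-parameter logistic (2PL). As a minimal illustration of that ranking machinery (not taken from the study itself):

```python
import math

def two_pl(theta, a, b):
    """2PL IRT model: probability of a correct response.

    theta : examinee ability on the latent continuum
    a     : item discrimination
    b     : item difficulty
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

An examinee whose ability equals the item's difficulty answers correctly with probability 0.5; cognitive diagnosis models, by contrast, replace this single continuum with profiles of mastered skills.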
Peer reviewed
Ong, Yoke Mooi; Williams, Julian Scott; Lamprianou, Iasonas – International Journal of Testing, 2011
The aims of this study are (a) to examine the sources of differential functioning by gender via differential bundle functioning (DBF) in mathematics assessment and (b) to use DBF to explore whether the differential functioning displayed is construct-relevant or construct-irrelevant. Three qualitatively different areas, namely curriculum domains,…
Descriptors: Test Bias, Gender Differences, Gender Bias, Mathematics Tests
Peer reviewed
Chulu, Bob Wajizigha; Sireci, Stephen G. – International Journal of Testing, 2011
Many examination agencies, policy makers, media houses, and the public at large make high-stakes decisions based on test scores. Unfortunately, in some cases educational tests are not statistically equated to account for test differences over time, which leads to inappropriate interpretations of students' performance. In this study we illustrate…
Descriptors: Classification, Foreign Countries, Item Response Theory, High Stakes Tests
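Statistical equating adjusts for form-to-form difficulty differences so that scores from different administrations are comparable. The abstract does not state which design the authors used, so the following is only a generic sketch of the simplest case, linear (mean-sigma) equating under a random-groups assumption:

```python
import statistics

def mean_sigma_equate(new_form_scores, ref_form_scores):
    """Linear (mean-sigma) equating under a random-groups design.

    Returns a function that maps a score on the new form onto the
    reference form's scale so that the two score distributions have
    matching means and standard deviations.
    """
    m_new = statistics.mean(new_form_scores)
    s_new = statistics.pstdev(new_form_scores)
    m_ref = statistics.mean(ref_form_scores)
    s_ref = statistics.pstdev(ref_form_scores)
    slope = s_ref / s_new
    intercept = m_ref - slope * m_new
    return lambda x: slope * x + intercept
```

Without such an adjustment, a year in which the test form happens to be harder is misread as a decline in student performance, which is exactly the interpretive risk the study highlights.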
Peer reviewed
Gierl, Mark J.; Alves, Cecilia; Majeau, Renate Taylor – International Journal of Testing, 2010
The purpose of this study is to apply the attribute hierarchy method in an operational diagnostic mathematics program at Grades 3 and 6 to promote cognitive inferences about students' problem-solving skills. The attribute hierarchy method is a psychometric procedure for classifying examinees' test item responses into a set of structured attribute…
Descriptors: Test Items, Student Reaction, Diagnostic Tests, Psychometrics
Peer reviewed
Kahraman, Nilufer; De Boeck, Paul; Janssen, Rianne – International Journal of Testing, 2009
This study introduces an approach for modeling multidimensional response data with construct-relevant group and domain factors. The item level parameter estimation process is extended to incorporate the refined effects of test dimension and group factors. Differences in item performances over groups are evaluated, distinguishing two levels of…
Descriptors: Test Bias, Test Items, Groups, Interaction