Publication Date
| Date range | Results |
| --- | --- |
| In 2015 | 1 |
| Since 2014 | 3 |
| Since 2011 (last 5 years) | 6 |
| Since 2006 (last 10 years) | 7 |
| Since 1996 (last 20 years) | 7 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Grade 4 | 6 |
| Comparative Analysis | 4 |
| Elementary School Students | 4 |
| Foreign Countries | 4 |
| Item Response Theory | 4 |
| Test Items | 4 |
| Models | 3 |
| Test Bias | 3 |
| Accuracy | 2 |
| Measurement | 2 |
Source
| Source | Results |
| --- | --- |
| International Journal of… | 7 |
Author
| Author | Results |
| --- | --- |
| Ercikan, Kadriye | 2 |
| Zumbo, Bruno D. | 2 |
| De Boeck, Paul | 1 |
| DeMars, Christine E. | 1 |
| Heil, Martin | 1 |
| Jansen, Petra | 1 |
| Janssen, Rianne | 1 |
| Kahraman, Nilufer | 1 |
| Lawless, René | 1 |
| Lee, Young-Sun | 1 |
Publication Type
| Publication type | Results |
| --- | --- |
| Journal Articles | 7 |
| Reports - Research | 6 |
| Reports - Descriptive | 1 |
Education Level
| Education level | Results |
| --- | --- |
| Elementary Education | 7 |
| Grade 4 | 7 |
| Intermediate Grades | 4 |
| Grade 3 | 2 |
| Early Childhood Education | 1 |
| Grade 2 | 1 |
| Grade 5 | 1 |
| Higher Education | 1 |
| Middle Schools | 1 |
| Postsecondary Education | 1 |
Showing all 7 results
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Oliveri, María Elena; Ercikan, Kadriye; Zumbo, Bruno D.; Lawless, René – International Journal of Testing, 2014
In this study, we contrast results from two differential item functioning (DIF) approaches (manifest and latent class) in terms of the number and sources of items identified as exhibiting DIF, using data from an international reading assessment. The latter approach yielded three latent classes, presenting evidence of heterogeneity in examinee response…
Descriptors: Test Bias, Comparative Analysis, Reading Tests, Effect Size
Quaiser-Pohl, Claudia; Neuburger, Sarah; Heil, Martin; Jansen, Petra; Schmelter, Andrea – International Journal of Testing, 2014
This article presents a reanalysis of the data of 862 second and fourth graders collected in two previous studies, focusing on the influence of method (psychometric vs. chronometric) and stimulus type on the gender difference in mental-rotation accuracy. The children had to solve mental-rotation tasks with animal pictures, letters, or cube…
Descriptors: Foreign Countries, Gender Differences, Accuracy, Age Differences
DeMars, Christine E. – International Journal of Testing, 2013
This tutorial addresses possible sources of confusion in interpreting trait scores from the bifactor model. The bifactor model may be used when subscores are desired, either for formative feedback on an achievement test or for theoretically different constructs on a psychological test. The bifactor model is often chosen because it requires fewer…
Descriptors: Test Interpretation, Scores, Models, Correlation
Sandilands, Debra; Oliveri, Maria Elena; Zumbo, Bruno D.; Ercikan, Kadriye – International Journal of Testing, 2013
International large-scale assessments of achievement often have a large degree of differential item functioning (DIF) between countries, which can threaten score equivalence and reduce the validity of inferences based on comparisons of group performances. It is important to understand potential sources of DIF to improve the validity of future…
Descriptors: Validity, Measures (Individuals), International Studies, Foreign Countries
Lee, Young-Sun; Park, Yoon Soo; Taylan, Didem – International Journal of Testing, 2011
Studies of international mathematics achievement such as the Trends in International Mathematics and Science Study (TIMSS) have employed classical test theory and item response theory to rank individuals within a latent ability continuum. Although these approaches have provided insights into comparisons between countries, they have yet to examine how specific…
Descriptors: Mathematics Achievement, Achievement Tests, Models, Cognitive Measurement
Kahraman, Nilufer; De Boeck, Paul; Janssen, Rianne – International Journal of Testing, 2009
This study introduces an approach for modeling multidimensional response data with construct-relevant group and domain factors. The item level parameter estimation process is extended to incorporate the refined effects of test dimension and group factors. Differences in item performances over groups are evaluated, distinguishing two levels of…
Descriptors: Test Bias, Test Items, Groups, Interaction

