| **Publication Date** | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 4 |
| Since 2006 (last 10 years) | 4 |
| Since 1996 (last 20 years) | 4 |
| **Descriptor** | Count |
| --- | --- |
| Item Response Theory | 3 |
| Academic Standards | 2 |
| Foreign Countries | 2 |
| Grade 4 | 2 |
| Mathematics Tests | 2 |
| Measurement | 2 |
| Test Construction | 2 |
| Test Items | 2 |
| Achievement Tests | 1 |
| Adaptive Testing | 1 |
| **Source** | Count |
| --- | --- |
| Educational Research and… | 4 |
| **Author** | Count |
| --- | --- |
| Kubinger, Klaus D. | 4 |
| Reif, Manuel | 3 |
| Hohensinn, Christine | 2 |
| Khorramdel, Lale | 2 |
| Yanagida, Takuya | 2 |
| Frebort, Martina | 1 |
| Hofer, Sandra | 1 |
| Holocher-Ertl, Stefana | 1 |
| Rasch, Dieter | 1 |
| Schleicher, Eva | 1 |
| **Publication Type** | Count |
| --- | --- |
| Journal Articles | 4 |
| Reports - Research | 3 |
| Reports - Descriptive | 1 |
| **Education Level** | Count |
| --- | --- |
| Elementary Education | 2 |
| Elementary Secondary Education | 2 |
| Grade 4 | 2 |
| Intermediate Grades | 2 |
Showing all 4 results
Kubinger, Klaus D.; Rasch, Dieter; Yanagida, Takuya – Educational Research and Evaluation, 2011
Although calibration of an achievement test in psychological and educational contexts is very often carried out with the Rasch model, data sampling is rarely designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009) recently suggested an approach for determining the sample size according to a given Type I and…
Descriptors: Sample Size, Simulation, Testing, Achievement Tests
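The Rasch model referenced in the abstract above relates the probability of a correct response to the difference between person ability and item difficulty, and simulating response data under the model is the usual basis for such simulation-driven sample-size studies. A minimal sketch in Python (function names and parameter values are illustrative assumptions, not the authors' actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, beta):
    """P(correct) under the Rasch model: logistic of ability minus difficulty."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

def simulate_responses(n_persons=200, item_betas=(-1.0, 0.0, 1.0)):
    """Simulate a persons-by-items 0/1 response matrix under the Rasch model.

    Abilities are drawn from a standard normal; item difficulties are fixed.
    """
    thetas = rng.normal(0.0, 1.0, size=n_persons)
    betas = np.asarray(item_betas)
    p = rasch_prob(thetas[:, None], betas[None, :])  # broadcast persons x items
    return (rng.random(p.shape) < p).astype(int)

data = simulate_responses()
print(data.shape)  # (200, 3)
```

A sample-size study would repeat such simulations at several `n_persons` values and check how often a chosen model test rejects at the intended Type I error rate.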
Kubinger, Klaus D.; Reif, Manuel; Yanagida, Takuya – Educational Research and Evaluation, 2011
Item position effects pose serious problems in adaptive testing, because different testees are necessarily presented with the same item at different presentation positions; when such effects occur, comparing the testees' ability parameter estimates would not be fair at all. In this article, a specific…
Descriptors: Adaptive Testing, Test Items, Item Analysis, Item Response Theory
Kubinger, Klaus D.; Hohensinn, Christine; Hofer, Sandra; Khorramdel, Lale; Frebort, Martina; Holocher-Ertl, Stefana; Reif, Manuel; Sonnleitner, Philipp – Educational Research and Evaluation, 2011
In large-scale assessments, it is rare for every item of the applicable item pool to be administered to every examinee. Within item response theory (IRT), and in particular the Rasch model (Rasch, 1960), this is not really a problem, because item calibration works nevertheless. The different test booklets only need to be conceptualized according…
Descriptors: Measurement, Item Response Theory, Test Construction, Academic Standards
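Booklet designs of the kind this abstract describes distribute blocks of items across booklets so that overlapping blocks link all items onto a common scale, even though no examinee sees the whole pool. A toy sketch of such linking (the pairwise scheme here is a hypothetical simplification, not the assessment's actual design):

```python
from itertools import combinations

def booklet_design(item_blocks, booklet_size=2):
    """Pair item blocks into booklets so every pair of blocks shares a booklet.

    With booklet_size=2 this yields all pairwise combinations, so every block
    overlaps with every other block and the design is fully linked.
    """
    return list(combinations(item_blocks, booklet_size))

booklets = booklet_design(["A", "B", "C", "D"])
print(booklets)  # 6 booklets; each block appears in 3 of them
```

Because every block co-occurs with every other block, responses from different booklets can be calibrated jointly under the Rasch model.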
Hohensinn, Christine; Kubinger, Klaus D.; Reif, Manuel; Schleicher, Eva; Khorramdel, Lale – Educational Research and Evaluation, 2011
Large-scale assessments usually use booklet designs that administer the same item at different positions within a booklet. The occurrence of position effects that influence item difficulty is therefore a crucial issue: not taking learning or fatigue effects into account would bias the estimated item difficulties. The…
Descriptors: Measurement, Test Construction, Test Items, Difficulty Level
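A position effect of the kind studied in these articles can be pictured as a shift in an item's effective difficulty that grows with its presentation position. A hedged sketch, assuming a simple linear fatigue effect (the functional form and effect size are illustrative assumptions):

```python
import numpy as np

def prob_with_position(theta, beta, position, effect=0.1):
    """Rasch probability with a hypothetical linear position effect.

    Each later presentation position adds `effect` to the item's effective
    difficulty, a simple stand-in for fatigue; a negative `effect` would
    model learning instead.
    """
    return 1.0 / (1.0 + np.exp(-(theta - (beta + effect * position))))

# Same item, same person, administered early vs. late in a booklet:
early = prob_with_position(0.0, 0.0, position=0)
late = prob_with_position(0.0, 0.0, position=10)
print(round(early, 3), round(late, 3))  # 0.5 0.269
```

Ignoring such a shift when calibrating would fold the position effect into the item's estimated difficulty, which is exactly the bias the abstract warns about.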