Publication Date
  In 2024: 0
  Since 2023: 1
  Since 2020 (last 5 years): 6
  Since 2015 (last 10 years): 6
  Since 2005 (last 20 years): 6
Descriptor
  Item Response Theory: 5
  Response Style (Tests): 4
  Statistical Analysis: 3
  Test Items: 3
  Comparative Analysis: 2
  Foreign Countries: 2
  Likert Scales: 2
  Ability: 1
  Accuracy: 1
  Achievement Tests: 1
  Artificial Intelligence: 1
Source
  Educational and Psychological Measurement: 6
Author
  Ames, Allison J.: 1
  Bandalos, Deborah L.: 1
  Bulut, Okan: 1
  Debelak, Rudolf: 1
  Gnambs, Timo: 1
  Henninger, Mirka: 1
  Huang, Hung-Yu: 1
  Leventhal, Brian C.: 1
  Schmidt, Christoph: 1
  Schroeders, Ulrich: 1
  Spratto, Elisabeth M.: 1
Publication Type
  Journal Articles: 6
  Reports - Research: 6
Education Level
  Higher Education: 1
  Postsecondary Education: 1
  Secondary Education: 1
Location
  Germany: 1
Assessments and Surveys
  Program for International…: 1
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance
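The data-driven split-point search described above can be sketched crudely in Python. This is a toy stand-in, not the authors' method: it scans a single covariate for the split that maximizes a two-proportion z-statistic on one item's correct-response rate, whereas real Rasch trees fit Rasch models in each candidate subgroup and test with model-based statistics. The function name and simulated data are illustrative only.

```python
import math
import random

def item_dif_split_search(covariate, correct, min_group=20):
    """Scan candidate split-points in a covariate for DIF on one item.

    For each split we compare the item's proportion correct between
    the two subgroups with a two-proportion z-statistic (a crude proxy
    for the significance tests Rasch trees use). Returns the covariate
    value at the best split and its z-statistic.
    """
    pairs = sorted(zip(covariate, correct))
    best_split, best_z = None, 0.0
    for i in range(min_group, len(pairs) - min_group):
        left = [c for _, c in pairs[:i]]
        right = [c for _, c in pairs[i:]]
        p1, p2 = sum(left) / len(left), sum(right) / len(right)
        p = (sum(left) + sum(right)) / len(pairs)
        if p in (0.0, 1.0):
            continue
        se = math.sqrt(p * (1 - p) * (1 / len(left) + 1 / len(right)))
        z = abs(p1 - p2) / se
        if z > best_z:
            best_split, best_z = pairs[i][0], z
    return best_split, best_z

random.seed(1)
cov = [random.uniform(18, 80) for _ in range(400)]
# Simulate DIF: respondents over 50 succeed on this item less often.
resp = [int(random.random() < (0.7 if x < 50 else 0.4)) for x in cov]
split, z = item_dif_split_search(cov, resp)
```

With this simulated effect, the search recovers a split near the true changepoint of 50 with a large z-statistic, which is exactly the behavior the article's concern is about: with enough data, even small true differences clear the significance threshold.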
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Spratto, Elisabeth M.; Leventhal, Brian C.; Bandalos, Deborah L. – Educational and Psychological Measurement, 2021
In this study, we examined the results and interpretations produced from two different IRTree models--one using paths consisting of only dichotomous decisions, and one using paths consisting of both dichotomous and polytomous decisions. We used data from two versions of an impulsivity measure. In the first version, all the response options had…
Descriptors: Comparative Analysis, Item Response Theory, Decision Making, Data Analysis
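The dichotomous-path idea can be illustrated with the common midpoint/direction/extremity decomposition of a 5-point Likert item. This is a generic textbook mapping, not necessarily the trees the study fit, and the function name is made up.

```python
def iirt_pseudoitems(response):
    """Map a 5-point Likert response (1..5) to dichotomous IRTree nodes:

      m: chose the midpoint (3)?
      d: if not midpoint, agree side (4 or 5)?
      e: if not midpoint, an extreme category (1 or 5)?

    Nodes below an unvisited branch are None and are treated as
    missing when the pseudo-items are fit with an IRT model.
    """
    m = 1 if response == 3 else 0
    d = None if m else (1 if response >= 4 else 0)
    e = None if m else (1 if response in (1, 5) else 0)
    return m, d, e

# Expand a short response vector into the pseudo-item design matrix.
design = [iirt_pseudoitems(r) for r in [1, 2, 3, 4, 5]]
```

A polytomous-path tree would instead keep, say, the intensity decision as a single ordered node rather than splitting it into binary steps, which is the contrast the study examines.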
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
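One classic screen for the careless responding described above is the longstring index, the longest run of identical consecutive answers. The sketch below shows that heuristic only; it is not the machine-learning detector the article evaluates.

```python
def longstring(responses):
    """Longest run of identical consecutive answers in one respondent's
    record. Very long runs (e.g., straight-lining the same category)
    are a common flag for content-disregarding response behavior."""
    if not responses:
        return 0
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best
```

In practice the index is compared against a scale-specific cutoff, since some runs are legitimate on homogeneous item blocks.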
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses with unplanned missing values. However, aberrant response behavior--such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
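The "dropping out beyond a certain point" pattern mentioned in the abstract is usually operationalized as not-reached items: a trailing run of omits at the end of the test, as opposed to interior skips. A minimal sketch (the function name is illustrative):

```python
def not_reached_count(responses):
    """Count trailing omitted items (None) in one examinee's response
    vector. Interior Nones are ordinary skips; only the final run of
    omits is treated as not-reached (dropout)."""
    n = 0
    for r in reversed(responses):
        if r is None:
            n += 1
        else:
            break
    return n
```

Distinguishing not-reached items from interior skips matters because the two are typically handled differently when estimating item and person parameters.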
Xiao, Jiaying; Bulut, Okan – Educational and Psychological Measurement, 2020
Large amounts of missing data could distort item parameter estimation and lead to biased ability estimates in educational assessments. Therefore, missing responses should be handled properly before estimating any parameters. In this study, two Monte Carlo simulation studies were conducted to compare the performance of four methods in handling…
Descriptors: Data, Computation, Ability, Maximum Likelihood Statistics
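The kind of bias such simulation studies measure can be illustrated with the simplest possible case: an item's proportion correct under two missing-data treatments. This toy sketch is not the four methods the study compares; it only shows why the choice of treatment shifts item-parameter estimates.

```python
import random

def item_p_correct(responses, omitted_as_wrong):
    """Proportion correct for one item under two missing-data
    treatments: score omits (None) as incorrect, or drop them
    (available-case analysis)."""
    if omitted_as_wrong:
        return sum(r or 0 for r in responses) / len(responses)
    observed = [r for r in responses if r is not None]
    return sum(observed) / len(observed)

random.seed(0)
# Simulate 1,000 examinees on an item with true p = .6; 20% of
# responses go missing completely at random before scoring.
resp = [None if random.random() < 0.2 else int(random.random() < 0.6)
        for _ in range(1000)]
p_wrong = item_p_correct(resp, omitted_as_wrong=True)   # biased low
p_drop = item_p_correct(resp, omitted_as_wrong=False)   # near .6
```

Scoring omits as incorrect drags the estimate toward zero whenever missingness is unrelated to the true answer; when missingness depends on ability, even available-case estimates become biased, which is why model-based treatments are compared in the study.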