Mangino, Anthony A.; Bolin, Jocelyn H.; Finch, W. Holmes – Educational and Psychological Measurement, 2023
This study seeks to compare fixed and mixed effects models for the purposes of predictive classification in the presence of multilevel data. The first part of the study utilizes a Monte Carlo simulation to compare fixed and mixed effects logistic regression and random forests. An applied examination of the prediction of student retention in the…
Descriptors: Prediction, Classification, Monte Carlo Methods, Foreign Countries
Liu, Ivy; Suesse, Thomas; Harvey, Samuel; Gu, Peter Yongqi; Fernández, Daniel; Randal, John – Educational and Psychological Measurement, 2023
The Mantel-Haenszel estimator is one of the most popular techniques for measuring differential item functioning (DIF). A generalization of this estimator is applied to the context of DIF to compare items by taking into account the covariance of odds ratio estimators between dependent items. Unlike item response theory, the method does not rely…
Descriptors: Test Bias, Computation, Statistical Analysis, Achievement Tests
Xiao, Leifeng; Hau, Kit-Tai – Educational and Psychological Measurement, 2023
We examined the performance of coefficient alpha and its potential competitors (ordinal alpha, omega total, Revelle's omega total [omega RT], omega hierarchical [omega h], greatest lower bound [GLB], and coefficient "H") with continuous and discrete data having different types of non-normality. Results showed the estimation bias was…
Descriptors: Statistical Bias, Statistical Analysis, Likert Scales, Statistical Distributions
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
van Dijk, Wilhelmina; Schatschneider, Christopher; Al Otaiba, Stephanie; Hart, Sara A. – Educational and Psychological Measurement, 2022
Complex research questions often need large samples to obtain accurate estimates of parameters and adequate power. Combining extant data sets into a large, pooled data set is one way this can be accomplished without expending resources. Measurement invariance (MI) modeling is an established approach to ensure participant scores are on the same…
Descriptors: Sample Size, Data Analysis, Goodness of Fit, Measurement
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
Rios, Joseph A. – Educational and Psychological Measurement, 2021
Low test-taking effort as a validity threat is common when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence…
Descriptors: Response Style (Tests), Statistical Analysis, Measurement, Comparative Analysis
Rios, Joseph A.; Soland, James – Educational and Psychological Measurement, 2021
As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform…
Descriptors: Computation, Accuracy, Item Response Theory, Response Style (Tests)