Publication Date

In 2022: 4
Since 2021: 12
Since 2018 (last 5 years): 40
Since 2013 (last 10 years): 79
Since 2003 (last 20 years): 143

Descriptor

Error of Measurement: 234
Statistical Analysis: 60
Item Response Theory: 58
Correlation: 48
Comparative Analysis: 42
Computation: 41
Monte Carlo Methods: 40
Sample Size: 39
Scores: 39
Models: 37
Simulation: 37


Source

Educational and Psychological Measurement: 234

Author

Marcoulides, George A.: 6
Raykov, Tenko: 6
Zumbo, Bruno D.: 6
Brennan, Robert L.: 5
DeMars, Christine E.: 5
Wang, Wen-Chung: 5
Cai, Li: 4
Finch, W. Holmes: 4
Shi, Dexin: 4
Zimmerman, Donald W.: 4
Algina, James: 3


Publication Type

Journal Articles: 212
Reports - Research: 141
Reports - Evaluative: 55
Reports - Descriptive: 14
Speeches/Meeting Papers: 5
Guides - Non-Classroom: 2
Opinion Papers: 1

Education Level

Secondary Education: 6
Junior High Schools: 5
Elementary Education: 4
Middle Schools: 4
Grade 7: 3
High Schools: 3
Higher Education: 3
Early Childhood Education: 2
Primary Education: 2
Adult Education: 1
Grade 2: 1



Location

Canada: 2
Germany: 2
Taiwan: 2
Australia: 1
Belgium: 1
Georgia: 1
Saudi Arabia: 1
South Korea: 1
United Kingdom (Wales): 1


Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Educational and Psychological Measurement, 2022

Multilevel structural equation modeling (MSEM) allows researchers to model latent factor structures at multiple levels simultaneously by decomposing within- and between-group variation. Yet the extent to which the sampling ratio (i.e., proportion of cases sampled from each group) influences the results of MSEM models remains unknown. This article…

Descriptors: Structural Equation Models, Factor Structure, Statistical Bias, Error of Measurement

Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2022

Composite reliability, or coefficient omega, can be estimated using structural equation modeling. Composite reliability is usually estimated under the basic independent clusters model of confirmatory factor analysis (ICM-CFA). However, due to the existence of cross-loadings, the model fit of the exploratory structural equation model (ESEM) is…

Descriptors: Comparative Analysis, Structural Equation Models, Factor Analysis, Reliability
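The quantity at the heart of this abstract, coefficient omega, can be sketched in a few lines. The loadings below are purely hypothetical (not from the study), and this is the basic standardized one-factor formula, not the ESEM estimator the article compares:

```python
import numpy as np

# Hypothetical standardized loadings for a 3-item scale (illustrative only).
loadings = np.array([0.7, 0.6, 0.8])
# Unique (error) variances implied by a standardized one-factor model.
error_vars = 1.0 - loadings**2

# Coefficient omega: common-factor variance over total variance.
omega = loadings.sum()**2 / (loadings.sum()**2 + error_vars.sum())
print(round(omega, 3))
```

With cross-loadings present, as the abstract notes, the single-column `loadings` vector above no longer captures the structure, which is why the ICM-CFA and ESEM estimates can diverge.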

Jiang, Zhehan; Raymond, Mark; DiStefano, Christine; Shi, Dexin; Liu, Ren; Sun, Junhua – Educational and Psychological Measurement, 2022

Computing confidence intervals around generalizability coefficients has long been a challenging task in generalizability theory. This is a serious practical problem because generalizability coefficients are often computed from designs where some facets have small sample sizes, and researchers have little guidance regarding the trustworthiness of the…

Descriptors: Monte Carlo Methods, Intervals, Generalizability Theory, Error of Measurement
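As a generic illustration of the problem this abstract addresses (not the authors' proposed procedure), the sketch below estimates a generalizability coefficient for a persons-by-items design from ANOVA mean squares and attaches a Monte Carlo (person-bootstrap) confidence interval; all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated persons-by-items scores (hypothetical design: 200 persons, 8 items).
n_persons, n_items = 200, 8
person = rng.normal(0.0, 1.0, size=(n_persons, 1))            # person effects
scores = person + rng.normal(0.0, 0.8, size=(n_persons, n_items))

def g_coefficient(x):
    """Generalizability coefficient for a p x i design via mean squares."""
    rows, cols = x.shape
    ms_p = cols * x.mean(axis=1).var(ddof=1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + x.mean()
    ms_pi = (resid**2).sum() / ((rows - 1) * (cols - 1))
    var_p = max((ms_p - ms_pi) / cols, 0.0)                   # person variance
    return var_p / (var_p + ms_pi / cols)

# Monte Carlo CI: resample persons, recompute the coefficient each time.
boots = [g_coefficient(scores[rng.integers(0, n_persons, n_persons)])
         for _ in range(500)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(g_coefficient(scores), 2), round(lo, 2), round(hi, 2))
```

With few levels on a facet (here, only 8 items), the interval widens noticeably, which is the trustworthiness concern the abstract raises.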

Cooperman, Allison W.; Weiss, David J.; Wang, Chun – Educational and Psychological Measurement, 2022

Adaptive measurement of change (AMC) is a psychometric method for measuring intra-individual change on one or more latent traits across testing occasions. Three hypothesis tests--a Z test, likelihood ratio test, and score ratio index--have demonstrated desirable statistical properties in this context, including low false positive rates and high…

Descriptors: Error of Measurement, Psychometrics, Hypothesis Testing, Simulation
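The first of the three tests the abstract mentions, the Z test for intra-individual change, has a simple form: divide the change in the ability estimate by the pooled standard error of the two occasions. The values below are hypothetical:

```python
import math

# Hypothetical ability estimates and standard errors from two occasions.
theta_1, se_1 = -0.20, 0.30
theta_2, se_2 = 0.75, 0.28

# Z test for change across occasions, pooling the two standard errors.
z = (theta_2 - theta_1) / math.sqrt(se_1**2 + se_2**2)
# Two-sided p value from the standard normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p, 3))
```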

Gorgun, Guher; Bulut, Okan – Educational and Psychological Measurement, 2021

In low-stakes assessments, some students may not reach the end of the test and leave some items unanswered due to various reasons (e.g., lack of test-taking motivation, poor time management, and test speededness). Not-reached items are often treated as incorrect or not-administered in the scoring process. However, when the proportion of…

Descriptors: Scoring, Test Items, Response Style (Tests), Mathematics Tests

Ellis, Jules L. – Educational and Psychological Measurement, 2021

This study develops a theoretical model for the costs of an exam as a function of its duration. Two kinds of costs are distinguished: (1) the costs of measurement errors and (2) the costs of the measurement. Both costs are expressed in the student's time. Based on a classical test theory model, enriched with assumptions on the context, the costs…

Descriptors: Test Length, Models, Error of Measurement, Measurement
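The trade-off the abstract describes can be sketched with classical test theory, though this toy version is not the paper's actual model and every constant below is assumed: lengthening the exam costs testing time but shrinks the standard error of measurement via the Spearman-Brown formula, so total student-time cost has an interior minimum:

```python
import numpy as np

rho1 = 0.15           # reliability of a single item (assumed)
sigma = 10.0          # observed-score SD on the reporting scale (assumed)
minutes_per_item = 1.5
cost_per_sem = 30.0   # student-time cost in minutes per SEM point (assumed)

def total_cost(n_items):
    # Spearman-Brown: reliability of an n-item test from one item's reliability.
    rho_n = n_items * rho1 / (1 + (n_items - 1) * rho1)
    sem = sigma * np.sqrt(1 - rho_n)          # standard error of measurement
    return n_items * minutes_per_item + cost_per_sem * sem

n = np.arange(5, 201)
best = n[np.argmin(total_cost(n))]
print(best)   # exam length minimizing total student-time cost
```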

Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021

Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…

Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation

Wang, Yan; Kim, Eunsook; Ferron, John M.; Dedrick, Robert F.; Tan, Tony X.; Stark, Stephen – Educational and Psychological Measurement, 2021

Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration…

Descriptors: Role, Error of Measurement, Monte Carlo Methods, Models

Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021

This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…

Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods

Pavlov, Goran; Maydeu-Olivares, Alberto; Shi, Dexin – Educational and Psychological Measurement, 2021

We examine the accuracy of p values obtained using the asymptotic mean and variance (MV) correction to the distribution of the sample standardized root mean squared residual (SRMR) proposed by Maydeu-Olivares to assess the exact fit of SEM models. In a simulation study, we found that under normality, the MV-corrected SRMR statistic provides…

Descriptors: Structural Equation Models, Goodness of Fit, Simulation, Error of Measurement
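The sample statistic being corrected here, the SRMR, is the root mean square of the residuals between the sample and model-implied correlation matrices. A minimal sketch with hypothetical matrices for a 3-variable model:

```python
import numpy as np

# Sample and model-implied correlation matrices (hypothetical values).
s = np.array([[1.00, 0.42, 0.30],
              [0.42, 1.00, 0.25],
              [0.30, 0.25, 1.00]])
sigma = np.array([[1.00, 0.40, 0.32],
                  [0.40, 1.00, 0.26],
                  [0.32, 0.26, 1.00]])

# SRMR: root mean square of residuals over the unique (lower-triangle,
# including diagonal) elements of the matrices.
idx = np.tril_indices(3)
srmr = np.sqrt(np.mean((s[idx] - sigma[idx])**2))
print(round(srmr, 4))
```

The mean-and-variance correction the abstract evaluates adjusts the reference distribution of this statistic, not the computation itself.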

Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2021

This study presents a latent (item response theory--like) framework of a recently developed classical approach to test scoring, equating, and item analysis, referred to as "D"-scoring method. Specifically, (a) person and item parameters are estimated under an item response function model on the "D"-scale (from 0 to 1) using…

Descriptors: Scoring, Equated Scores, Item Analysis, Item Response Theory

Ferrando, Pere J.; Navarro-González, David – Educational and Psychological Measurement, 2021

Item response theory "dual" models (DMs) in which both items and individuals are viewed as sources of differential measurement error so far have been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for…

Descriptors: Item Response Theory, Error of Measurement, Models, Factor Analysis

Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020

Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…

Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format

Murrah, William M. – Educational and Psychological Measurement, 2020

Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study explores the impact of measurement error on comparing predictors when one is measured with error, followed by…

Descriptors: Error of Measurement, Statistical Bias, Multiple Regression Analysis, Predictor Variables
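The bias the abstract refers to is easy to reproduce in simulation: when one predictor is measured with error, its estimated coefficient is attenuated toward zero, distorting comparisons of predictor importance. A minimal sketch with assumed coefficients and reliability:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical true model: y = 0.5*x1 + 0.3*x2 + noise.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

# x1 is observed with measurement error (error variance 0.5,
# so reliability = 1 / 1.5, roughly 0.67).
x1_obs = x1 + rng.normal(0.0, np.sqrt(0.5), size=n)

# OLS with intercept: the x1 coefficient is attenuated below 0.5.
X = np.column_stack([np.ones(n), x1_obs, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(b.round(2))
```

Here the expected coefficient on the error-laden predictor is the true 0.5 times the reliability, about 0.33, while the error-free predictor's coefficient stays near 0.3, so the two would appear closer in importance than they really are.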

Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020

In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic to evaluate the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned and new ASE formulas were derived from a general…

Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
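One of the easy-to-use formulas the abstract alludes to is the ASE of the maximum-likelihood ability estimate, the inverse square root of the test information. A sketch under the 2PL model with hypothetical item parameters:

```python
import numpy as np

# Hypothetical 2PL item parameters (discrimination a, difficulty b).
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def ase_ml(theta):
    """ASE of the ML ability estimate under the 2PL:
    1 / sqrt(I(theta)), with I(theta) = sum_j a_j^2 * P_j * (1 - P_j)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a**2 * p * (1.0 - p))
    return 1.0 / np.sqrt(info)

print(round(ase_ml(0.0), 3))
```

Formulas for other estimators (e.g., weighted likelihood or Bayesian modal) add correction terms to this baseline, which is where the accuracy questions raised in the abstract arise.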