Publication Date

In 2021: 6
Since 2020: 14
Since 2017 (last 5 years): 45
Since 2012 (last 10 years): 82
Since 2002 (last 20 years): 139

Descriptor

Error of Measurement: 228
Statistical Analysis: 60
Item Response Theory: 57
Correlation: 47
Comparative Analysis: 41
Computation: 39
Monte Carlo Methods: 39
Scores: 39
Sample Size: 38
Models: 36
Simulation: 35

Source

Educational and Psychological…: 228

Author

Marcoulides, George A.: 6
Raykov, Tenko: 6
Zumbo, Bruno D.: 6
Brennan, Robert L.: 5
DeMars, Christine E.: 5
Wang, Wen-Chung: 5
Cai, Li: 4
Finch, W. Holmes: 4
Zimmerman, Donald W.: 4
Algina, James: 3
Cureton, Edward E.: 3

Publication Type

Journal Articles: 206
Reports - Research: 135
Reports - Evaluative: 55
Reports - Descriptive: 14
Speeches/Meeting Papers: 5
Guides - Non-Classroom: 2
Opinion Papers: 1

Education Level

Secondary Education: 6
Junior High Schools: 5
Middle Schools: 4
Elementary Education: 3
Grade 7: 3
High Schools: 3
Higher Education: 3
Adult Education: 1
Early Childhood Education: 1
Grade 4: 1
Grade 5: 1

Location

Canada: 2
Germany: 2
Taiwan: 2
Australia: 1
Belgium: 1
Georgia: 1
Saudi Arabia: 1
South Korea: 1
United Kingdom (Wales): 1

Wang, Yan; Kim, Eunsook; Ferron, John M.; Dedrick, Robert F.; Tan, Tony X.; Stark, Stephen – Educational and Psychological Measurement, 2021

Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration…

Descriptors: Role, Error of Measurement, Monte Carlo Methods, Models

Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021

This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…

Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods

Pavlov, Goran; Maydeu-Olivares, Alberto; Shi, Dexin – Educational and Psychological Measurement, 2021

We examine the accuracy of p values obtained using the asymptotic mean and variance (MV) correction to the distribution of the sample standardized root mean squared residual (SRMR) proposed by Maydeu-Olivares to assess the exact fit of SEM models. In a simulation study, we found that under normality, the MV-corrected SRMR statistic provides…

Descriptors: Structural Equation Models, Goodness of Fit, Simulation, Error of Measurement
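The SRMR discussed above is definitionally simple: the square root of the mean squared discrepancy between the sample and model-implied standardized covariances, taken over the unique (lower-triangular) elements. A minimal numpy sketch, with a hypothetical one-factor model and made-up loadings:

```python
import numpy as np

def srmr(sample_corr, implied_corr):
    # Standardized root mean squared residual over the unique
    # (lower-triangular, including diagonal) matrix elements.
    idx = np.tril_indices_from(sample_corr)
    resid = sample_corr[idx] - implied_corr[idx]
    return float(np.sqrt(np.mean(resid ** 2)))

# Toy example: sample correlations vs. a one-factor model's implied matrix
S = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.35],
              [0.4, 0.35, 1.0]])
lam = np.array([0.8, 0.6, 0.5])   # hypothetical factor loadings
Sigma = np.outer(lam, lam)
np.fill_diagonal(Sigma, 1.0)      # unit variances on the diagonal
fit = srmr(S, Sigma)              # small value -> close fit
```

The MV correction studied in the article adjusts the *sampling distribution* of this statistic; the point estimate itself is computed as above.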

Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2021

This study presents a latent (item response theory-like) framework of a recently developed classical approach to test scoring, equating, and item analysis, referred to as the "D"-scoring method. Specifically, (a) person and item parameters are estimated under an item response function model on the "D"-scale (from 0 to 1) using…

Descriptors: Scoring, Equated Scores, Item Analysis, Item Response Theory
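The logistic estimation step mentioned in the abstract can be illustrated with a toy Newton-Raphson fit of a logistic item response function on the 0-1 "D"-scale. All data and parameter values here are hypothetical, and this sketch is not the authors' estimation procedure, only a generic logistic-regression fit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
d = rng.uniform(0, 1, size=n)          # hypothetical D-scores on the 0-1 scale
true_b0, true_b1 = -2.0, 4.0
p = 1 / (1 + np.exp(-(true_b0 + true_b1 * d)))
y = rng.binomial(1, p)                 # simulated binary item responses

# Newton-Raphson for logistic regression: P(y=1|d) = logistic(b0 + b1*d)
X = np.column_stack([np.ones(n), d])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)                   # score vector
    hess = -(X.T * (mu * (1 - mu))) @ X     # observed information (negated)
    beta -= np.linalg.solve(hess, grad)
# beta now approximates (true_b0, true_b1)
```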

Ellis, Jules L. – Educational and Psychological Measurement, 2021

This study develops a theoretical model for the costs of an exam as a function of its duration. Two kinds of costs are distinguished: (1) the costs of measurement errors and (2) the costs of the measurement. Both costs are expressed in units of student time. Based on a classical test theory model, enriched with assumptions on the context, the costs…

Descriptors: Test Length, Models, Error of Measurement, Measurement
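The trade-off described above can be sketched numerically: as duration grows, reliability rises (so the cost of measurement error falls) while the cost of testing time rises linearly, giving a U-shaped total cost with an interior optimum. The functional forms and all constants below are illustrative assumptions, not the article's model:

```python
import numpy as np

def total_cost(duration, base_rel=0.6, base_time=30.0,
               error_cost=100.0, time_cost=1.0):
    # Spearman-Brown: reliability of a test lengthened by factor k
    k = duration / base_time
    rel = k * base_rel / (1.0 + (k - 1.0) * base_rel)
    # Error cost falls as reliability rises; measurement cost grows with time.
    return error_cost * (1.0 - rel) + time_cost * duration

durations = np.linspace(10, 120, 111)          # minutes, step 1
costs = np.array([total_cost(t) for t in durations])
best = float(durations[int(np.argmin(costs))])  # interior minimum
```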

Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021

Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…

Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation

Yesiltas, Gonca; Paek, Insu – Educational and Psychological Measurement, 2020

A log-linear model (LLM) is a well-known statistical method to examine the relationship among categorical variables. This study investigated the performance of LLM in detecting differential item functioning (DIF) for polytomously scored items via simulations where various sample sizes, ability mean differences (impact), and DIF types were…

Descriptors: Simulation, Sample Size, Item Analysis, Scores
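The log-linear approach to DIF rests on likelihood-ratio comparisons of nested models, whose core quantity is the G-squared statistic for a contingency table. A numpy sketch for a group-by-response-category table (counts hypothetical):

```python
import numpy as np

def g_squared(table):
    # Likelihood-ratio (G^2) test of independence in a two-way table,
    # the building block of log-linear DIF analysis.
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    mask = table > 0          # 0 * log(0) terms contribute nothing
    return float(2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask])))

# Reference vs. focal group counts across three score categories of one item
obs = np.array([[40, 35, 25],
                [20, 30, 50]])
g2 = g_squared(obs)   # compare to chi-square with (2-1)*(3-1) = 2 df
```

Here g2 well exceeds the 5.99 critical value at 2 df, so the toy table would flag group-response dependence; a full LLM DIF analysis conditions on matching score strata as well.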

Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020

This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM) that is gaining attention in the field of educational and psychological measurement and large-scale assessments. In a previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…

Descriptors: Item Response Theory, Scoring, True Scores, Scaling

Finch, W. Holmes – Educational and Psychological Measurement, 2020

Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…

Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
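One widely used tool for the retention decision described above is parallel analysis: retain factors whose observed eigenvalues exceed the average eigenvalues of random data of the same dimensions. A numpy sketch on simulated two-factor data (all loadings and sizes hypothetical, and not necessarily one of the procedures the study compared):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 8
# Hypothetical data: two uncorrelated factors drive eight indicators
load = np.zeros((p, 2))
load[:4, 0] = 0.7
load[4:, 1] = 0.7
data = rng.normal(size=(n, 2)) @ load.T + 0.5 * rng.normal(size=(n, p))
eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

# Parallel analysis: mean eigenvalues of uncorrelated data, same n and p
sims = np.array([
    np.sort(np.linalg.eigvalsh(
        np.corrcoef(rng.normal(size=(n, p)), rowvar=False)))[::-1]
    for _ in range(200)])
threshold = sims.mean(axis=0)
n_factors = int(np.sum(eig > threshold))   # recovers the 2 generating factors
```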

Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020

Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…

Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format

Murrah, William M. – Educational and Psychological Measurement, 2020

Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study explores the impact of measurement error on comparing predictors when one is measured with error, followed by…

Descriptors: Error of Measurement, Statistical Bias, Multiple Regression Analysis, Predictor Variables
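The attenuation phenomenon the abstract refers to is easy to reproduce: adding noise to a predictor shrinks its estimated slope toward zero by the predictor's reliability. A minimal Monte Carlo sketch with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x_true = rng.normal(size=n)
y = 0.5 * x_true + rng.normal(scale=1.0, size=n)
# Observed predictor contaminated with error (reliability = 0.5 here)
x_obs = x_true + rng.normal(scale=1.0, size=n)

def slope(x, y):
    # Simple-regression slope: cov(x, y) / var(x)
    return float(np.cov(x, y)[0, 1] / np.var(x, ddof=1))

b_true = slope(x_true, y)   # near the generating value 0.5
b_obs = slope(x_obs, y)     # attenuated toward 0.5 * reliability = 0.25
```

With two predictors, this attenuation also distorts the *comparison* between them, which is the scenario the study examines.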

Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020

In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic for evaluating the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned and new ASE formulas were derived from a general…

Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
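For the maximum-likelihood ability estimator, the textbook ASE is the reciprocal square root of the test information function. A sketch for the two-parameter logistic model with hypothetical item parameters (the article's point is that this simple formula is not equally accurate for every estimator):

```python
import numpy as np

def ase_2pl(theta, a, b):
    # ASE(theta) = 1 / sqrt(test information at theta), where for the 2PL
    # each item contributes a^2 * P * (1 - P) to the information.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a ** 2 * p * (1.0 - p))
    return float(1.0 / np.sqrt(info))

a = np.array([1.2, 0.8, 1.5, 1.0])   # hypothetical discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])  # hypothetical difficulties
se_mid = ase_2pl(0.0, a, b)          # abilities near the item difficulties
se_ext = ase_2pl(3.0, a, b)          # far from the difficulties: less information
```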

Shi, Dexin; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020

We examined the effect of estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population SEM (structural equation modeling) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual…

Descriptors: Structural Equation Models, Computation, Maximum Likelihood Statistics, Least Squares Statistics
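The indices compared in that study are deterministic functions of the model chi-square once an estimation method has produced it, which is why the choice of estimator matters. A sketch of the usual sample formulas (note that RMSEA variants divide by n or n - 1 depending on the software; all numbers below are hypothetical):

```python
import numpy as np

def rmsea(chi2, df, n):
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return float(np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1))))

def cfi(chi2, df, chi2_base, df_base):
    # CFI compares the target model's excess misfit with a baseline
    # (independence) model's, clipped to the 0-1 range.
    d_target = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    return 1.0 - d_target / max(d_target, d_base)

r = rmsea(chi2=85.0, df=40, n=500)
c = cfi(chi2=85.0, df=40, chi2_base=900.0, df_base=55)
```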

Sideridis, Georgios D.; Tsaousis, Ioannis; Alamri, Abeer A. – Educational and Psychological Measurement, 2020

The main thesis of the present study is to use the Bayesian structural equation modeling (BSEM) methodology of establishing approximate measurement invariance (A-MI) using data from a national examination in Saudi Arabia as an alternative to not meeting strong invariance criteria. Instead, we illustrate how to account for the absence of…

Descriptors: Bayesian Statistics, Structural Equation Models, Foreign Countries, Error of Measurement

Schweizer, Karl; Reiß, Siegbert; Troche, Stefan – Educational and Psychological Measurement, 2019

The article reports three simulation studies conducted to find out whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether the representation of the assumed source of the effect prevents impairment of model fit and whether it is possible to identify and discriminate this method effect from…

Descriptors: Timed Tests, Testing, Barriers, Testing Problems