Showing 1 to 15 of 47 results
Peer reviewed
James Ohisei Uanhoro – Educational and Psychological Measurement, 2024
Accounting for model misspecification in Bayesian structural equation models is an active area of research. We present a uniquely Bayesian approach to misspecification that models the degree of misspecification as a parameter--a parameter akin to the correlation root mean squared residual. The misspecification parameter can be interpreted on its…
Descriptors: Bayesian Statistics, Structural Equation Models, Simulation, Statistical Inference
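For reference, the correlation root mean squared residual (CRMR) that the misspecification parameter is likened to summarizes how far the model-implied correlations fall from the observed ones. A minimal Python sketch with hypothetical matrices, not the authors' estimation procedure:

    import numpy as np

    # Hypothetical observed and model-implied correlation matrices (3 indicators)
    R_obs = np.array([[1.00, 0.45, 0.30],
                      [0.45, 1.00, 0.55],
                      [0.30, 0.55, 1.00]])
    R_model = np.array([[1.00, 0.40, 0.40],
                        [0.40, 1.00, 0.40],
                        [0.40, 0.40, 1.00]])

    # CRMR: root mean square of the unique off-diagonal correlation residuals
    iu = np.triu_indices_from(R_obs, k=1)
    residuals = R_obs[iu] - R_model[iu]
    crmr = np.sqrt(np.mean(residuals ** 2))
    print(round(crmr, 4))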
Peer reviewed
André Beauducel; Norbert Hilger; Tobias Kuhl – Educational and Psychological Measurement, 2024
Regression factor score predictors have the maximum factor score determinacy, that is, the maximum correlation with the corresponding factor, but they do not have the same inter-correlations as the factors. As it might be useful to compute factor score predictors that have the same inter-correlations as the factors, correlation-preserving factor…
Descriptors: Scores, Factor Analysis, Correlation, Predictor Variables
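For orientation, the regression (Thurstone) factor score predictor and its determinacy follow directly from the loading matrix, factor correlations, and uniquenesses. The sketch below uses hypothetical values and shows only the standard formulas, not the correlation-preserving predictors examined in the article:

    import numpy as np

    # Hypothetical two-factor model: loadings and factor correlations
    L = np.array([[0.7, 0.0], [0.6, 0.0], [0.5, 0.0],
                  [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
    Phi = np.array([[1.0, 0.3], [0.3, 1.0]])
    Psi = np.diag(1 - np.diag(L @ Phi @ L.T))     # uniquenesses (standardized items)

    Sigma = L @ Phi @ L.T + Psi                   # model-implied covariance matrix
    W = np.linalg.solve(Sigma, L @ Phi)           # regression score weights: Sigma^-1 L Phi
    # Factor score determinacy: correlation of each predictor with its factor
    determinacy = np.sqrt(np.diag(Phi @ L.T @ np.linalg.solve(Sigma, L @ Phi)))
    print(np.round(determinacy, 3))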
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used for noncognitive tests typically present a set of response options that measure different traits and instruct respondents to judge among these options according to their preference, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
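As a generic point of reference for how a diagnostic classification model maps attribute profiles to response probabilities, the sketch below implements the standard DINA item response function with hypothetical guessing and slip parameters; it is not the forced-choice extension developed in the article:

    import numpy as np

    def dina_prob(alpha, q, guess, slip):
        """P(correct) under the DINA model for a single item.
        alpha: examinee attribute profile (0/1 vector)
        q: Q-matrix row listing the attributes the item requires (0/1 vector)
        """
        eta = int(np.all(alpha[q == 1] == 1))   # 1 if all required attributes are mastered
        return (1 - slip) ** eta * guess ** (1 - eta)

    alpha = np.array([1, 0, 1])                 # hypothetical mastery profile
    q = np.array([1, 0, 1])                     # item requires attributes 1 and 3
    print(dina_prob(alpha, q, guess=0.2, slip=0.1))   # 0.9: all required attributes mastered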
Peer reviewed
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2022
Composite reliability, or coefficient omega, can be estimated using structural equation modeling. Composite reliability is usually estimated under the basic independent clusters model of confirmatory factor analysis (ICM-CFA). However, due to the existence of cross-loadings, the model fit of the exploratory structural equation model (ESEM) is…
Descriptors: Comparative Analysis, Structural Equation Models, Factor Analysis, Reliability
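The composite reliability referred to here, coefficient omega, is a simple function of the CFA loadings and error variances. A minimal one-factor sketch with hypothetical standardized loadings:

    import numpy as np

    loadings = np.array([0.7, 0.65, 0.6, 0.55])   # hypothetical standardized loadings
    errors = 1 - loadings ** 2                    # error variances of standardized items

    # omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())
    print(round(omega, 3))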
Peer reviewed
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
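To make the idea of conditioning on the assignment variable concrete, the sketch below tests DIF for a single item by comparing logistic models with and without an accommodation indicator while controlling for the running variable that determines accommodation status. The data, variable names, and cutoff are hypothetical, and this is a simplified illustration, not the full framework proposed in the article:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    running = rng.normal(size=n)                     # hypothetical assignment (running) variable
    accommodated = (running < -0.5).astype(int)      # accommodation assigned below a cutoff
    ability = running + rng.normal(scale=0.5, size=n)
    p = 1 / (1 + np.exp(-(ability - 0.2 * accommodated)))   # small built-in DIF effect
    df = pd.DataFrame({"item": rng.binomial(1, p),
                       "running": running, "accommodated": accommodated})

    # DIF test: does accommodation status add to a model that already conditions
    # on the running variable?
    m0 = smf.logit("item ~ running", data=df).fit(disp=0)
    m1 = smf.logit("item ~ running + accommodated", data=df).fit(disp=0)
    print(round(2 * (m1.llf - m0.llf), 2))           # likelihood-ratio statistic, 1 df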
Peer reviewed
Zhan, Peida – Educational and Psychological Measurement, 2020
Timely diagnostic feedback is helpful for students and teachers, enabling them to adjust their learning and teaching plans according to a current diagnosis. Motivated by the practical concern that the simultaneous estimation strategy currently adopted by longitudinal learning diagnosis models does not provide timely diagnostic feedback, this study…
Descriptors: Markov Processes, Formative Evaluation, Evaluation Methods, Feedback (Response)
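For intuition about the Markov structure that underlies longitudinal learning diagnosis, the sketch below propagates a mastery probability for one attribute through a hypothetical transition matrix; it is a generic illustration rather than the estimation strategy proposed in the study:

    import numpy as np

    # Hypothetical transition matrix for one attribute:
    # rows = state at time t (non-mastery, mastery), columns = state at time t + 1
    T = np.array([[0.7, 0.3],    # non-master: 30% chance of acquiring the attribute
                  [0.1, 0.9]])   # master: 10% chance of losing it

    p = np.array([0.8, 0.2])     # initial distribution: 20% mastery at time 1
    for t in range(3):
        p = p @ T
        print(f"time {t + 2}: P(mastery) = {p[1]:.3f}")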
Peer reviewed
Feuerstahler, Leah M.; Waller, Niels; MacDonald, Angus, III – Educational and Psychological Measurement, 2020
Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory…
Descriptors: Item Response Theory, Psychopathology, Intelligence Tests, Psychological Evaluation
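For readers less familiar with item response models, the two-parameter logistic model gives the probability of a keyed response as a function of the latent trait; a minimal sketch with hypothetical item parameters, not necessarily the model fit in the article:

    import numpy as np

    def p_2pl(theta, a, b):
        """Two-parameter logistic IRT model: P(X = 1 | theta)."""
        return 1 / (1 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)                        # latent trait values
    print(np.round(p_2pl(theta, a=1.2, b=0.5), 3))       # hypothetical discrimination and difficulty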
Peer reviewed
Yesiltas, Gonca; Paek, Insu – Educational and Psychological Measurement, 2020
A log-linear model (LLM) is a well-known statistical method to examine the relationship among categorical variables. This study investigated the performance of LLM in detecting differential item functioning (DIF) for polytomously scored items via simulations where various sample sizes, ability mean differences (impact), and DIF types were…
Descriptors: Simulation, Sample Size, Item Analysis, Scores
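One common log-linear setup models the table of ability stratum x group x item score as a Poisson regression on cell counts and tests whether a group-by-score term improves fit. The sketch below, with hypothetical simulated data and a simplified three-category score, illustrates that general approach, not the exact simulation design of the study:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 2000
    group = rng.integers(0, 2, n)                        # reference vs focal group
    stratum = rng.integers(0, 4, n)                      # ability strata (matching variable)
    # Hypothetical polytomous item score (0-2) with mild uniform DIF against group 1
    raw = 0.8 * stratum - 0.4 * group + rng.normal(0, 0.7, n)
    score = np.clip(np.round(raw / 1.2), 0, 2).astype(int)

    counts = (pd.DataFrame({"group": group, "stratum": stratum, "score": score})
              .value_counts().rename("n").reset_index())

    # Null model: score distribution depends on stratum only; alternative adds group
    m0 = smf.glm("n ~ C(stratum)*C(score) + C(stratum)*C(group)",
                 data=counts, family=sm.families.Poisson()).fit()
    m1 = smf.glm("n ~ C(stratum)*C(score) + C(stratum)*C(group) + C(group):C(score)",
                 data=counts, family=sm.families.Poisson()).fit()
    print(round(m0.deviance - m1.deviance, 2))           # LR statistic for uniform DIF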
Peer reviewed
Cain, Meghan K.; Zhang, Zhiyong; Bergeman, C. S. – Educational and Psychological Measurement, 2018
This article serves as a practical guide to mediation design and analysis by evaluating the ability of mediation models to detect a significant mediation effect using limited data. The cross-sectional mediation model, which has been shown to be biased when the mediation is happening over time, is compared with longitudinal mediation models:…
Descriptors: Mediation Theory, Case Studies, Longitudinal Studies, Measurement Techniques
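The cross-sectional mediation model mentioned here estimates the indirect effect as the product of the X-to-M path and the M-to-Y path controlling for X. A minimal sketch with simulated, hypothetical data; the longitudinal models compared in the article are not shown:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 200
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)                 # hypothetical a path = 0.5
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)       # hypothetical b path = 0.4
    df = pd.DataFrame({"x": x, "m": m, "y": y})

    a = smf.ols("m ~ x", data=df).fit().params["x"]       # a path
    b = smf.ols("y ~ m + x", data=df).fit().params["m"]   # b path, controlling for x
    print(round(a * b, 3))                                 # indirect (mediated) effect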
Peer reviewed
Lamprianou, Iasonas – Educational and Psychological Measurement, 2018
It is common practice for assessment programs to organize qualifying sessions during which the raters (often known as "markers" or "judges") demonstrate their consistency before operational rating commences. Because of the high-stakes nature of many rating activities, the research community tends to continuously explore new…
Descriptors: Social Networks, Network Analysis, Comparative Analysis, Innovation
Peer reviewed
Nugent, William R. – Educational and Psychological Measurement, 2017
Meta-analysis is a significant methodological advance that is increasingly important in research synthesis. Fundamental to meta-analysis is the presumption that effect sizes, such as the standardized mean difference (SMD), based on scores from different measures are comparable. It has been argued that population observed score SMDs based on scores…
Descriptors: Meta Analysis, Effect Size, Comparative Analysis, Scores
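The standardized mean difference at issue is typically the difference in group means divided by a pooled standard deviation (the Cohen's d form). A minimal sketch with hypothetical summary statistics:

    import numpy as np

    # Hypothetical group summaries: mean, SD, sample size
    m1, s1, n1 = 105.0, 14.0, 60
    m2, s2, n2 = 100.0, 15.0, 55

    # Pooled standard deviation and the Cohen's d form of the SMD
    s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    print(round((m1 - m2) / s_pooled, 3))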
Peer reviewed
Li, Ming; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis may not only improve the ability of the mixture model to clearly differentiate between subjects but also make interpretation of latent group membership more…
Descriptors: Simulation, Comparative Analysis, Monte Carlo Methods, Guidelines
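One simple way to relate covariates to latent class membership, in the spirit of the estimation approaches compared here, is the classify-then-regress ("three-step") strategy sketched below with hypothetical data; the article's comparison covers more refined estimators than this:

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 500
    covariate = rng.normal(size=n)
    # Hypothetical two-class structure in which membership depends on the covariate
    true_class = (covariate + rng.normal(size=n) > 0).astype(int)
    y = rng.normal(loc=np.where(true_class == 1, 2.0, -2.0), scale=1.0)

    # Steps 1-2: fit the mixture to the indicator and assign modal classes
    gm = GaussianMixture(n_components=2, random_state=0).fit(y.reshape(-1, 1))
    assigned = gm.predict(y.reshape(-1, 1))

    # Step 3: regress assigned class on the covariate (ignores classification error)
    clf = LogisticRegression().fit(covariate.reshape(-1, 1), assigned)
    print(clf.coef_)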
Peer reviewed
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
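The "simple courses" mentioned here are fixed patterns imposed on the loadings of an item-position factor, for example a linear increase across item positions. A hypothetical sketch of such constraint vectors; the more flexible representations examined in the article are not reproduced:

    import numpy as np

    n_items = 10
    # A linearly increasing course of item-position loadings (a simple fixed course)
    linear_course = np.linspace(0, 1, n_items)
    # One possible alternative course, here logarithmic, purely for illustration
    log_course = np.log1p(np.arange(n_items)) / np.log1p(n_items - 1)
    print(np.round(linear_course, 2))
    print(np.round(log_course, 2))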
Peer reviewed
Devlieger, Ines; Mayer, Axel; Rosseel, Yves – Educational and Psychological Measurement, 2016
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
Descriptors: Regression (Statistics), Comparative Analysis, Structural Equation Models, Monte Carlo Methods
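The naive two-step version of factor score regression computes factor scores first and then runs an ordinary regression on them; the four methods reviewed in the article refine this in different ways. A hypothetical one-factor sketch of the two-step idea (Croon's bias correction itself is not shown):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n = 400
    eta = rng.normal(size=n)                                   # latent predictor
    X = np.outer(eta, [0.8, 0.7, 0.6]) + rng.normal(scale=0.6, size=(n, 3))
    y = 0.5 * eta + rng.normal(scale=0.8, size=n)              # outcome driven by the factor

    # Step 1: estimate factor scores from the indicators
    scores = FactorAnalysis(n_components=1, random_state=0).fit_transform(X)
    # Step 2: regress the outcome on the estimated scores
    print(LinearRegression().fit(scores, y).coef_)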
Peer reviewed
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
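A concrete version of the modeling choice examined here, entering an observed cluster mean together with the Level 1 deviation score in a two-level model, can be sketched with statsmodels; the data and variable names are hypothetical, and the latent cluster mean alternative is not shown:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    clusters, per_cluster = 30, 20
    cid = np.repeat(np.arange(clusters), per_cluster)
    treat = np.repeat(rng.integers(0, 2, clusters), per_cluster)   # cluster-level treatment
    x = rng.normal(size=clusters * per_cluster)                    # Level 1 covariate
    u = np.repeat(rng.normal(scale=0.5, size=clusters), per_cluster)
    y = 0.4 * treat + 0.3 * x + u + rng.normal(size=clusters * per_cluster)

    df = pd.DataFrame({"y": y, "treat": treat, "x": x, "cluster": cid})
    df["x_mean"] = df.groupby("cluster")["x"].transform("mean")    # observed cluster mean
    df["x_dev"] = df["x"] - df["x_mean"]                           # Level 1 deviation score

    # Random-intercept model with treatment, cluster mean, and deviation covariates
    fit = smf.mixedlm("y ~ treat + x_mean + x_dev", df, groups=df["cluster"]).fit()
    print(fit.params)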