Showing 1 to 15 of 21 results
Peer reviewed
Shi, Dexin; Lee, Taehun; Fairchild, Amanda J.; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020
This study compares two missing data procedures in the context of ordinal factor analysis models: pairwise deletion (PD; the default setting in Mplus) and multiple imputation (MI). We examine which procedure yields parameter estimates and model fit indices closer to those of complete data. The performance of PD and MI is compared under a…
Descriptors: Factor Analysis, Statistical Analysis, Computation, Goodness of Fit
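The pairwise-deletion procedure this entry compares against multiple imputation can be made concrete with a small sketch. This is illustrative toy data, not material from the study; the only assumption carried over from the abstract is that PD computes each correlation from the cases observed on both variables.

```python
import numpy as np

def pairwise_corr(x, y):
    # Pairwise deletion: use every case where BOTH variables are observed,
    # rather than dropping any case with a missing value anywhere.
    mask = ~np.isnan(x) & ~np.isnan(y)
    return float(np.corrcoef(x[mask], y[mask])[0, 1])

# Toy data with one missing value (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, np.nan, 4.0, 5.0])

r_pd = pairwise_corr(x, y)  # correlation from the 4 complete pairs
```

Multiple imputation would instead fill the missing cell several times from a model of the data and pool the resulting estimates; the study's question is which of the two routes tracks the complete-data results more closely.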
Shi, Dexin; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020
We examined the effect of estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population SEM (structural equation modeling) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual…
Descriptors: Structural Equation Models, Computation, Maximum Likelihood Statistics, Least Squares Statistics
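For readers unfamiliar with the indices named above, two of them have simple textbook sample estimators based on a model's chi-square statistic. This is a hedged sketch, not the study's population-level definitions: the article analyzes population values, software packages differ on N versus N - 1, and SRMR is omitted here because it requires the full residual correlation matrix.

```python
import math

def rmsea(chi2, df, n):
    # Root mean square error of approximation from the model chi-square,
    # its degrees of freedom, and the sample size (one common formula).
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    # Comparative fit index: 1 minus the ratio of the target model's
    # noncentrality (chi2 - df) to the baseline model's noncentrality.
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0
```

Because both indices are functions of the chi-square statistic, and ML, ULS, and DWLS each produce a different chi-square for the same data, the estimation method can move the indices even when the model is unchanged, which is the effect the study examines.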
Peer reviewed
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2019
Measures initially designed to be single-trait often yield data that are compatible with both an essentially unidimensional factor-analysis (FA) solution and a correlated-factors solution. For these cases, this article proposes an approach aimed at providing information for deciding which of the two solutions is the most appropriate and useful.…
Descriptors: Factor Analysis, Computation, Reliability, Goodness of Fit
Peer reviewed
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Peer reviewed
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally PA compares the eigenvalues for a sample correlation matrix with the eigenvalues for correlation matrices for 100 comparison datasets generated such that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
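The traditional procedure this abstract describes can be sketched in a few lines: compare the sample eigenvalues with the average eigenvalues from comparison datasets of independent normal variables of the same size. This is an illustrative implementation of the traditional approach the abstract says uses the wrong reference distribution, not the corrected method the article proposes; the one-factor toy data and all parameter values are invented for the example.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    # Traditional PA: eigenvalues of the sample correlation matrix versus
    # mean eigenvalues from independent-variable comparison datasets.
    rng = np.random.default_rng(seed)
    n, p = data.shape
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    ref = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        ref += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    ref /= n_sims
    exceeds = sample_eigs > ref
    # Count leading eigenvalues exceeding the reference, stopping at the
    # first one that falls below it.
    return p if exceeds.all() else int(np.argmin(exceeds))

# One-factor toy data: six indicators each loading 0.8 on a single factor.
rng = np.random.default_rng(42)
factor = rng.standard_normal((200, 1))
data = factor @ np.full((1, 6), 0.8) + 0.6 * rng.standard_normal((200, 6))
n_factors = parallel_analysis(data)
```

On this clearly one-factor dataset the traditional procedure retains a single factor; the article's point is that the independent-variables reference is nevertheless the wrong null distribution in general.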
Peer reviewed
Paek, Insu; Cui, Mengyao; Öztürk Gübes, Nese; Yang, Yanyun – Educational and Psychological Measurement, 2018
The purpose of this article is twofold. The first is to provide evaluative information on the recovery of model parameters and their standard errors for the two-parameter item response theory (IRT) model using different estimation methods by Mplus. The second is to provide easily accessible information for practitioners, instructors, and students…
Descriptors: Item Response Theory, Computation, Factor Analysis, Statistical Analysis
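As background, the two-parameter IRT model at the center of this study gives the probability of a correct response as a logistic function of ability. This is a minimal sketch of the standard 2PL formula only; the Mplus parameterizations and their conversion to IRT discrimination and difficulty parameters are exactly what the article documents and are not reproduced here.

```python
import math

def p_correct(theta, a, b):
    # Two-parameter logistic model: ability theta, discrimination a,
    # difficulty b.  P(correct) = 1 / (1 + exp(-a * (theta - b))).
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At theta equal to the item difficulty the probability is exactly 0.5, and larger a makes the curve steeper around that point, which is why a is called discrimination.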
Peer reviewed
Lance, Charles E.; Fan, Yi – Educational and Psychological Measurement, 2016
We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…
Descriptors: Multitrait Multimethod Techniques, Factor Analysis, Models, Goodness of Fit
Peer reviewed
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Peer reviewed
Reckase, Mark D.; Xu, Jing-Ru – Educational and Psychological Measurement, 2015
How to compute and report subscores for a test that was originally designed for reporting scores on a unidimensional scale has been a topic of interest in recent years. In the research reported here, we describe an application of multidimensional item response theory to identify a subscore structure in a test designed for reporting results using a…
Descriptors: English, Language Skills, English Language Learners, Scores
Peer reviewed
Wetzel, Eunike; Xu, Xueli; von Davier, Matthias – Educational and Psychological Measurement, 2015
In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…
Descriptors: Surveys, Regression (Statistics), Models, Research Methodology
Peer reviewed
Can, Seda; van de Schoot, Rens; Hox, Joop – Educational and Psychological Measurement, 2015
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…
Descriptors: Factor Analysis, Comparative Analysis, Maximum Likelihood Statistics, Bayesian Statistics
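The intraclass correlation this abstract manipulates has a simple definition in a two-level model: the share of total variance that lies between clusters. A one-line sketch under standard assumptions, with the variance components taken as known rather than estimated:

```python
def icc(between_var, within_var):
    # Intraclass correlation for a two-level model: the proportion of
    # total variance attributable to differences between clusters.
    return between_var / (between_var + within_var)
```

A small ICC means most variation is within clusters, so the between-level part of a two-level CFA rests on little information, which is why the study varies the ICC alongside the degree of collinearity.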
Peer reviewed
Reise, Steven; Moore, Tyler; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2011
Reise, Cook, and Moore proposed a "comparison modeling" approach to assess the distortion in item parameter estimates when a unidimensional item response theory (IRT) model is imposed on multidimensional data. Central to their approach is the comparison of item slope parameter estimates from a unidimensional IRT model (a restricted model), with…
Descriptors: Item Response Theory, Models, Computation, Comparative Analysis
Peer reviewed
Sass, Daniel A. – Educational and Psychological Measurement, 2010
Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…
Descriptors: Intervals, Simulation, Factor Structure, Hypothesis Testing
Peer reviewed
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis