Publication Date

In 2021: 5
Since 2020: 11
Since 2017 (last 5 years): 32
Since 2012 (last 10 years): 70
Since 2002 (last 20 years): 203

Descriptor

Factor Analysis: 575
Factor Structure: 214
Test Validity: 122
Correlation: 115
Measures (Individuals): 77
Psychometrics: 72
Scores: 69
Statistical Analysis: 64
Test Reliability: 63
Higher Education: 61
Comparative Analysis: 59

Source

Educational and Psychological…: 575


Audience

Practitioners: 1
Students: 1
Teachers: 1

Location

Canada: 10
United States: 9
Australia: 7
Germany: 5
Israel: 5
United Kingdom: 5
Japan: 3
South Korea: 3
Belgium: 2
China: 2
Hong Kong: 2


Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021

This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…

Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods

Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2021

Unit-weight sum scores (UWSSs) are routinely used as estimates of factor scores on the basis of solutions obtained with the nonlinear exploratory factor analysis (EFA) model for ordered-categorical responses. Theoretically, this practice results in a loss of information and accuracy, and is expected to lead to biased estimates. However, the…

Descriptors: Scores, Factor Analysis, Automation, Fidelity
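The trade-off Ferrando and Lorenzo-Seva describe can be sketched numerically. The following is a hypothetical simulation (not code or data from the article): items are generated from a one-factor congeneric model with assumed loadings, and the unit-weight sum score is correlated with the true factor it stands in for.

```python
import numpy as np

# Hypothetical one-factor simulation: how closely does a unit-weight
# sum score (UWSS) track the latent factor? Loadings are assumed values.
rng = np.random.default_rng(0)
n, p = 2000, 6
loadings = np.linspace(0.5, 0.8, p)              # assumed item loadings
factor = rng.standard_normal(n)                  # true factor scores
errors = rng.standard_normal((n, p)) * np.sqrt(1 - loadings**2)
items = factor[:, None] * loadings + errors      # congeneric one-factor model
uwss = items.sum(axis=1)                         # unit-weight sum score
r = np.corrcoef(uwss, factor)[0, 1]
print(round(r, 3))                               # high, but below 1
```

The correlation falls short of 1, which is the "loss of information" the abstract refers to; how consequential that loss is in practice is the article's question.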

Wind, Stefanie A.; Schumacker, Randall E. – Educational and Psychological Measurement, 2021

Researchers frequently use Rasch models to analyze survey responses because these models provide accurate parameter estimates for items and examinees when there are missing data. However, researchers have not fully considered how missing data affect the accuracy of dimensionality assessment in Rasch analyses such as principal components analysis…

Descriptors: Item Response Theory, Data, Factor Analysis, Accuracy
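For readers unfamiliar with the model class, the Rasch item response function that such analyses build on is a one-parameter logistic curve. This is the standard textbook form, not code from the article:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response given
    person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly 0.5
print(rasch_prob(0.0, 0.0))   # 0.5
```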

Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021

Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…

Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation
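As context for the cutoff discussion, the usual point estimate of one such fit index, the RMSEA, is easy to compute from a model chi-square. This is the standard textbook formula, not the authors' code:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from a model chi-square statistic:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))), a common convention."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# A model fitting no worse than expected by chance gives RMSEA = 0
print(rmsea(20.0, 20, 500))   # 0.0
# A misfitting model gives a positive value
print(round(rmsea(80.0, 20, 500), 3))
```

Whether a given RMSEA value counts as "acceptable" is exactly the cutoff question the abstract says transfers poorly to the EFA context.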

Levy, Roy; Xia, Yan; Green, Samuel B. – Educational and Psychological Measurement, 2021

A number of psychometricians have suggested that parallel analysis (PA) tends to yield more accurate results in determining the number of factors in comparison with other statistical methods. Nevertheless, all too often PA can suggest an incorrect number of factors, particularly in statistically unfavorable conditions (e.g., small sample sizes and…

Descriptors: Bayesian Statistics, Statistical Analysis, Factor Structure, Probability
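Horn's classical parallel analysis, which this study builds on, can be sketched in a few lines. This is a simplified illustration of the basic PA idea with made-up data, not the authors' Bayesian variant:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis (simplified): retain factors whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues obtained from
    random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        random = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(random, rowvar=False)))[::-1]
    return int(np.sum(obs > sims.mean(axis=0)))

# Two-factor toy data: items 0-2 share one factor, items 3-5 another
rng = np.random.default_rng(1)
f = rng.standard_normal((1000, 2))
noise = rng.standard_normal((1000, 6))
items = np.hstack([0.7 * f[:, [0]] + 0.5 * noise[:, :3],
                   0.7 * f[:, [1]] + 0.5 * noise[:, 3:]])
k = parallel_analysis(items)
print(k)
```

In the unfavorable conditions the abstract mentions (small samples, weak loadings), the eigenvalue margins shrink and this comparison starts making mistakes.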

Beauducel, André; Kersting, Martin – Educational and Psychological Measurement, 2020

We investigated by means of a simulation study how well methods for factor rotation can identify a two-facet simple structure. Samples were generated from orthogonal and oblique two-facet population factor models with 4 (2 factors per facet) to 12 factors (6 factors per facet). Samples drawn from orthogonal populations were submitted to factor…

Descriptors: Factor Structure, Factor Analysis, Sample Size, Intelligence

Shi, Dexin; Lee, Taehun; Fairchild, Amanda J.; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020

This study compares two missing data procedures in the context of ordinal factor analysis models: pairwise deletion (PD; the default setting in Mplus) and multiple imputation (MI). We examine which procedure yields parameter estimates and model fit indices closer to those of complete data. The performance of PD and MI is compared under a…

Descriptors: Factor Analysis, Statistical Analysis, Computation, Goodness of Fit
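What pairwise deletion conditions on can be shown concretely: each correlation uses only the rows where both variables are observed. This is a generic sketch of the idea with toy data, not the study's setup:

```python
import numpy as np

def pairwise_corr(X):
    """Correlation matrix under pairwise deletion: each r_ij uses only
    the rows where both column i and column j are non-missing."""
    p = X.shape[1]
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            mask = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            r = np.corrcoef(X[mask, i], X[mask, j])[0, 1]
            R[i, j] = R[j, i] = r
    return R

# Toy data: the (0,1) correlation silently drops row 2 only
X = np.array([[1.0, 2.0, 0.5],
              [2.0, 4.1, 1.0],
              [3.0, np.nan, 2.0],
              [4.0, 8.2, 2.5],
              [5.0, 9.9, 4.0]])
R = pairwise_corr(X)
print(np.round(R[0, 1], 3))
```

A known caution with PD is that, because every entry is computed on a different subsample, the resulting matrix need not be positive definite, which is one reason comparisons like this study's matter.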

Finch, W. Holmes – Educational and Psychological Measurement, 2020

Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…

Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
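One of the oldest retention tools in that toolbox is the eigenvalue-greater-than-one heuristic, sketched here on made-up one-factor data (an illustration, not the article's method or data):

```python
import numpy as np

def kaiser_rule(data):
    """Eigenvalue-greater-than-one heuristic for the number of factors:
    count correlation-matrix eigenvalues above 1. Known to over-extract."""
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return int(np.sum(eig > 1.0))

# One-factor toy data: five items, all loading 0.7 on a single factor
rng = np.random.default_rng(2)
f = rng.standard_normal((1000, 1))
items = 0.7 * f + np.sqrt(1 - 0.7**2) * rng.standard_normal((1000, 5))
n_factors = kaiser_rule(items)
print(n_factors)
```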

Shi, Dexin; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020

We examined the effect of estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population SEM (structural equation modeling) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual…

Descriptors: Structural Equation Models, Computation, Maximum Likelihood Statistics, Least Squares Statistics
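Of the three indices named, the SRMR is the most direct to compute from the residuals between observed and model-implied correlations. A minimal sketch of the standard definition (not the authors' code):

```python
import numpy as np

def srmr(sample_corr, implied_corr):
    """Standardized root mean square residual between an observed and a
    model-implied correlation matrix, computed over the unique
    lower-triangular elements (diagonal included)."""
    idx = np.tril_indices_from(sample_corr)
    resid = sample_corr[idx] - implied_corr[idx]
    return float(np.sqrt(np.mean(resid**2)))

# Identical matrices give SRMR = 0
S = np.array([[1.0, 0.3], [0.3, 1.0]])
print(srmr(S, S))   # 0.0
```

RMSEA and CFI, by contrast, are built from the fit function being minimized, which is why the choice among ML, ULS, and DWLS can move them even at the population level.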

Sideridis, Georgios D.; Tsaousis, Ioannis; Alamri, Abeer A. – Educational and Psychological Measurement, 2020

The main aim of the present study is to use the Bayesian structural equation modeling (BSEM) methodology of establishing approximate measurement invariance (A-MI), using data from a national examination in Saudi Arabia, as an alternative when strong invariance criteria are not met. Instead, we illustrate how to account for the absence of…

Descriptors: Bayesian Statistics, Structural Equation Models, Foreign Countries, Error of Measurement

Goretzko, David; Heumann, Christian; Bühner, Markus – Educational and Psychological Measurement, 2020

Exploratory factor analysis is a statistical method commonly used in psychological research to investigate latent variables and to develop questionnaires. Although such self-report questionnaires are prone to missing values, there is not much literature on this topic with regard to exploratory factor analysis--and especially the process of factor…

Descriptors: Factor Analysis, Data Analysis, Research Methodology, Psychological Studies

Schweizer, Karl; Reiß, Siegbert; Troche, Stefan – Educational and Psychological Measurement, 2019

The article reports three simulation studies conducted to find out whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether the representation of the assumed source of the effect prevents impairment of model fit and whether it is possible to identify and discriminate this method effect from…

Descriptors: Timed Tests, Testing, Barriers, Testing Problems

Yang, Yanyun; Xia, Yan – Educational and Psychological Measurement, 2019

When item scores are ordered categorical, categorical omega can be computed based on the parameter estimates from a factor analysis model using frequentist estimators such as diagonally weighted least squares. When the sample size is relatively small and thresholds are different across items, using diagonally weighted least squares can yield a…

Descriptors: Scores, Sample Size, Bayesian Statistics, Item Analysis
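For continuous items, the omega coefficient that the categorical variant generalizes has a simple closed form in the one-factor model parameters. A sketch with assumed loadings (not the article's estimator or data):

```python
import numpy as np

def coefficient_omega(loadings, residual_vars):
    """Coefficient omega for a one-factor model:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances)."""
    s = np.sum(loadings) ** 2
    return float(s / (s + np.sum(residual_vars)))

lam = np.array([0.7, 0.7, 0.7, 0.7])       # assumed standardized loadings
omega = coefficient_omega(lam, 1 - lam**2)
print(round(omega, 3))   # 0.794
```

The categorical version plugs in loadings and thresholds estimated from an ordinal factor model, which is where the small-sample estimation problems the abstract describes arise.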

Xia, Yan; Green, Samuel B.; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2019

Past research suggests revised parallel analysis (R-PA) tends to yield relatively accurate results in determining the number of factors in exploratory factor analysis. R-PA can be interpreted as a series of hypothesis tests. At each step in the series, a null hypothesis is tested that an additional factor accounts for zero common variance among…

Descriptors: Effect Size, Factor Analysis, Hypothesis Testing, Psychometrics

Jordan, Pascal; Spiess, Martin – Educational and Psychological Measurement, 2019

Factor loadings and item discrimination parameters play a key role in scale construction. A multitude of heuristics regarding their interpretation are hardwired into practice--for example, neglecting low loadings and assigning items to exactly one scale. We challenge the common sense interpretation of these parameters by providing counterexamples…

Descriptors: Test Construction, Test Items, Item Response Theory, Factor Structure
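The tight link between the two parameter families the abstract mentions can be made explicit: under the normal-ogive formulation, a standardized factor loading maps directly to an IRT discrimination parameter. A sketch of that standard conversion (not the authors' counterexamples):

```python
import math

def loading_to_discrimination(lam):
    """Standard normal-ogive conversion from a standardized factor loading
    to an IRT discrimination parameter: a = lambda / sqrt(1 - lambda^2)."""
    return lam / math.sqrt(1.0 - lam**2)

# Even a "low" loading of 0.3 maps to a nonzero discrimination,
# which is part of why the neglect-low-loadings heuristic can mislead
print(round(loading_to_discrimination(0.3), 3))
```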