Publication Date
In 2022: 0
Since 2021: 9
Since 2018 (last 5 years): 36
Since 2013 (last 10 years): 91
Since 2003 (last 20 years): 165
Descriptor
Correlation: 464
Factor Analysis: 95
Test Validity: 80
Statistical Analysis: 74
Scores: 65
Monte Carlo Methods: 56
Higher Education: 55
Comparative Analysis: 53
Sample Size: 50
Computation: 48
Factor Structure: 47
Source
Educational and Psychological Measurement: 464
Author
Marcoulides, George A.: 11
Raykov, Tenko: 11
Michael, William B.: 10
Zumbo, Bruno D.: 8
Algina, James: 7
Vegelius, Jan: 6
Martin, John D.: 5
Rae, Gordon: 5
Dunlap, William P.: 4
Finch, W. Holmes: 4
Omizo, Michael M.: 4
Education Level
Higher Education: 19
Elementary Education: 12
Postsecondary Education: 12
Secondary Education: 10
Junior High Schools: 8
Middle Schools: 8
Grade 8: 6
Grade 4: 4
Grade 5: 4
Grade 6: 4
High Schools: 4
Audience
Researchers: 1
Location
Canada: 8
United States: 4
Taiwan: 3
United Kingdom: 3
Australia: 2
China: 2
Hong Kong: 2
India: 2
Japan: 2
Netherlands (Amsterdam): 2
California: 1
Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021
Model fit indices are increasingly being recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…
Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation
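Since this entry turns on comparing factor models of increasing dimensionality against some fit criterion, a minimal sketch may help. It uses scikit-learn's FactorAnalysis with a simple BIC-style penalty as a stand-in for the cutoff-based fit indices the article actually studies; the data and parameter count are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))        # placeholder data: 500 cases, 12 items

    n, p = X.shape
    for k in range(1, 6):
        fa = FactorAnalysis(n_components=k).fit(X)
        loglik = fa.score(X) * n          # score() returns the mean log-likelihood
        n_params = p * k + p              # rough count: loadings plus uniquenesses
        bic = -2 * loglik + n_params * np.log(n)
        print(f"{k} factors: BIC = {bic:.1f}")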
Andersson, Gustaf; Yang-Wallentin, Fan – Educational and Psychological Measurement, 2021
Factor score regression has recently received growing interest as an alternative for structural equation modeling. However, many applications are left without guidance because of the focus on normally distributed outcomes in the literature. We perform a simulation study to examine how a selection of factor scoring methods compare when estimating…
Descriptors: Regression (Statistics), Statistical Analysis, Computation, Scoring
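A minimal sketch of the two-step procedure at issue, assuming one latent predictor and an observed outcome; the scoring method (scikit-learn's default regression-type scores) and all quantities are illustrative, not the article's simulation design. How the two-step estimate compares with the generating slope depends on the scoring method, which is the article's point.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    eta = rng.normal(size=1000)                        # latent predictor
    X = np.outer(eta, [0.8, 0.7, 0.6, 0.5]) + rng.normal(scale=0.5, size=(1000, 4))
    y = 0.4 * eta + rng.normal(scale=0.8, size=1000)   # true structural slope: 0.4

    scores = FactorAnalysis(n_components=1).fit_transform(X)   # step 1: factor scores
    slope = LinearRegression().fit(scores, y).coef_[0]         # step 2: regress on scores
    print(f"|slope| = {abs(slope):.2f} (generating value: 0.4)")  # factor sign is arbitrary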
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
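A minimal sketch of the response-tree recoding that IRTree models build on, using one common three-node scheme for a 5-point scale (midpoint use, direction, extremity). The coding convention is an assumption for illustration, not taken from the article.

    import numpy as np

    def irtree_recode(resp):
        """Recode a 1..5 Likert response into (midpoint, direction, extreme) pseudo-items."""
        mid = 1 if resp == 3 else 0
        direction = np.nan if mid else (1 if resp > 3 else 0)        # agree side?
        extreme = np.nan if mid else (1 if resp in (1, 5) else 0)    # endpoint chosen?
        return mid, direction, extreme

    for r in range(1, 6):
        print(r, irtree_recode(r))   # pseudo-items then enter an IRT model per node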
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
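A minimal sketch of the synthesis step alone, pooling per-study DIF effect sizes with fixed-effect inverse-variance weights; the numbers are made-up placeholders, and the article's MGCFA step that produces the effect sizes is not shown.

    import numpy as np

    effects = np.array([0.12, 0.05, 0.20])       # hypothetical per-study DIF effects
    variances = np.array([0.010, 0.020, 0.015])  # hypothetical sampling variances

    w = 1 / variances                            # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    print(f"pooled DIF effect = {pooled:.3f} (SE = {se:.3f})")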
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2021
Unit-weight sum scores (UWSSs) are routinely used as estimates of factor scores on the basis of solutions obtained with the nonlinear exploratory factor analysis (EFA) model for ordered-categorical responses. Theoretically, this practice results in a loss of information and accuracy, and is expected to lead to biased estimates. However, the…
Descriptors: Scores, Factor Analysis, Automation, Fidelity
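A minimal sketch of the comparison at issue: how closely unit-weight sum scores track model-based factor scores. Linear factor analysis on simulated continuous items stands in for the article's nonlinear EFA model for ordered-categorical responses.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)
    eta = rng.normal(size=1000)
    items = np.outer(eta, [0.9, 0.7, 0.5, 0.4]) + rng.normal(scale=0.6, size=(1000, 4))

    sum_scores = items.sum(axis=1)               # unit-weight sum scores
    fa_scores = FactorAnalysis(n_components=1).fit_transform(items).ravel()
    r = np.corrcoef(sum_scores, fa_scores)[0, 1]
    print(f"sum-score / factor-score correlation: {abs(r):.3f}")  # |r|: factor sign is arbitrary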
Bezirhan, Ummugul; von Davier, Matthias; Grabovsky, Irina – Educational and Psychological Measurement, 2021
This article presents a new approach to the analysis of how students answer tests and how they allocate resources in terms of time on task and revisiting previously answered questions. Previous research has shown that in high-stakes assessments, most test takers do not end the testing session early, but rather spend all of the time they were…
Descriptors: Response Style (Tests), Accuracy, Reaction Time, Ability
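This kind of process-data analysis starts from navigation logs. A minimal sketch of one ingredient, counting revisits to previously answered questions from an ordered event log; the log format is an assumption for illustration.

    from collections import Counter

    visit_log = ["Q1", "Q2", "Q3", "Q2", "Q4", "Q2"]    # hypothetical navigation events
    visits = Counter(visit_log)
    revisits = {item: n - 1 for item, n in visits.items() if n > 1}
    print(revisits)                                      # {'Q2': 2}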
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2021
Methods for optimal factor rotation of two-facet loading matrices have recently been proposed. However, the problem of the correct number of factors to retain for rotation of two-facet loading matrices has rarely been addressed in the context of exploratory factor analysis. Most previous studies were based on the observation that two-facet loading…
Descriptors: Factor Analysis, Statistical Analysis, Correlation, Models
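For readers who want to see the rotation step itself, a minimal sketch of ordinary varimax rotation on a single-facet loading matrix via scikit-learn; the article's two-facet case and its retention criteria are not implemented here.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(6)
    F = rng.normal(size=(400, 2))
    load = np.array([[0.8, 0.0], [0.7, 0.1], [0.1, 0.8], [0.0, 0.7]])
    X = F @ load.T + rng.normal(scale=0.5, size=(400, 4))

    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
    print(np.round(fa.components_.T, 2))   # rotated loadings approximate simple structure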
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John – Educational and Psychological Measurement, 2021
This study examined the impact of omitting a covariate interaction effect on parameter estimates in multilevel multiple-indicator multiple-cause (MIMIC) models, as well as the sensitivity of fit indices to model misspecification when the between-level, within-level, or cross-level interaction effect was left out of the models. The parameter estimates…
Descriptors: Goodness of Fit, Hierarchical Linear Modeling, Computation, Models
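A minimal single-level sketch of the general phenomenon the study examines: omitting a real interaction between correlated covariates shifts the remaining estimates. The multilevel and MIMIC structure is left out for brevity, and all quantities are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    x2 = rng.integers(0, 2, size=n).astype(float)       # binary covariate
    x1 = x2 + rng.normal(size=n)                        # correlated with x2
    y = 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(scale=0.5, size=n)

    full = np.column_stack([np.ones(n), x1, x2, x1 * x2])
    b_full, *_ = np.linalg.lstsq(full, y, rcond=None)
    b_red, *_ = np.linalg.lstsq(full[:, :3], y, rcond=None)   # interaction omitted
    print("with interaction:   ", np.round(b_full, 2))        # ~[0, 0.5, 0.3, 0.4]
    print("interaction omitted:", np.round(b_red, 2))         # slopes absorb the omitted term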
Gorgun, Guher; Bulut, Okan – Educational and Psychological Measurement, 2021
In low-stakes assessments, some students may not reach the end of the test, leaving some items unanswered for various reasons (e.g., lack of test-taking motivation, poor time management, or test speededness). Not-reached items are often treated as incorrect or as not administered in the scoring process. However, when the proportion of…
Descriptors: Scoring, Test Items, Response Style (Tests), Mathematics Tests
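A minimal sketch contrasting the two conventional treatments named above, with None marking a not-reached item in a hypothetical end-of-test response vector.

    responses = [1, 1, 0, 1, None, None]     # last two items not reached

    as_incorrect = sum(r or 0 for r in responses) / len(responses)
    reached = [r for r in responses if r is not None]
    as_not_administered = sum(reached) / len(reached)
    print(f"treated as incorrect:        {as_incorrect:.2f}")      # 0.50
    print(f"treated as not administered: {as_not_administered:.2f}")  # 0.75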
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
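A minimal sketch of LSA-style scoring for short answers: embed responses with TF-IDF plus truncated SVD and score a new answer by cosine similarity to reference answers. The corpus, dimensionality, and scoring rule are toy assumptions, not the authors' pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    references = [
        "traffic accidents would decrease because machines react faster",
        "cities would redesign roads since vehicles communicate directly",
        "people could read or work during the commute",
    ]
    vec = TfidfVectorizer().fit(references)
    svd = TruncatedSVD(n_components=2).fit(vec.transform(references))
    ref_vecs = svd.transform(vec.transform(references))

    def lsa_score(answer):
        a = svd.transform(vec.transform([answer]))
        return cosine_similarity(a, ref_vecs).max()   # nearest reference as the score

    print(round(lsa_score("fewer accidents because machines react faster"), 2))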
Finch, W. Holmes – Educational and Psychological Measurement, 2020
Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…
Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
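One of the most widely recommended tools in this literature is Horn's parallel analysis: retain factors whose observed eigenvalues exceed the average eigenvalues of random data of the same shape. A minimal sketch, illustrative rather than the article's full comparison.

    import numpy as np

    def parallel_analysis(X, n_sims=100, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        mean_rand = np.zeros(p)
        for _ in range(n_sims):
            R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
            mean_rand += np.linalg.eigvalsh(R)[::-1] / n_sims
        k = 0
        while k < p and obs[k] > mean_rand[k]:   # count leading factors above chance
            k += 1
        return k

    rng = np.random.default_rng(1)
    F = rng.normal(size=(300, 2))
    X = F @ rng.normal(size=(2, 10)) + rng.normal(size=(300, 10))
    print("factors to retain:", parallel_analysis(X))   # expect 2 for this setup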
Raykov, Tenko; Al-Qataee, Abdullah A.; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
A procedure for evaluating validity-related coefficients and their differences is discussed, which is applicable when one or more frequently used assumptions in empirical educational, behavioral, and social research are violated. The method is developed within the framework of the latent variable modeling methodology and accomplishes point and…
Descriptors: Validity, Evaluation Methods, Social Science Research, Correlation
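In the same assumption-lean spirit, a minimal sketch of interval estimation for the difference between two dependent validity coefficients via a percentile bootstrap; this is a generic resampling device, not the latent variable modeling procedure the article develops.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 400
    x = rng.normal(size=n)                      # hypothetical predictor
    y1 = 0.5 * x + rng.normal(size=n)           # criterion 1
    y2 = 0.3 * x + rng.normal(size=n)           # criterion 2

    diffs = []
    for _ in range(2000):
        i = rng.integers(0, n, size=n)          # resample cases, keeping rows together
        diffs.append(np.corrcoef(x[i], y1[i])[0, 1] - np.corrcoef(x[i], y2[i])[0, 1])
    print("95% CI for the difference:", np.round(np.percentile(diffs, [2.5, 97.5]), 3))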
Hayes, Timothy; Usami, Satoshi – Educational and Psychological Measurement, 2020
Recently, quantitative researchers have shown increased interest in two-step factor score regression (FSR) approaches to structural model estimation. A particularly promising approach proposed by Croon involves first extracting factor scores for each latent factor in a larger model, then correcting the variance-covariance matrix of the factor…
Descriptors: Regression (Statistics), Structural Equation Models, Statistical Bias, Correlation
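The correction idea can be stated compactly, with notation assumed here rather than quoted from the article. Given the measurement-model covariance and factor score weights A (scores F = A'y), the naive score covariance mixes structural and error terms:

    \Sigma = \Lambda \Phi \Lambda^{\top} + \Theta, \qquad
    \operatorname{Cov}(F) = A^{\top}\Sigma A
      = A^{\top}\Lambda\,\Phi\,\Lambda^{\top}A + A^{\top}\Theta A,

so a Croon-style correction solves for the factor covariance matrix:

    \Phi = (A^{\top}\Lambda)^{-1}\,\bigl(A^{\top}\Sigma A - A^{\top}\Theta A\bigr)\,(\Lambda^{\top}A)^{-1}.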
McGrath, Kathleen V.; Leighton, Elizabeth A.; Ene, Mihaela; DiStefano, Christine; Monrad, Diane M. – Educational and Psychological Measurement, 2020
Survey research frequently involves the collection of data from multiple informants. Results, however, are usually analyzed by informant group, potentially ignoring important relationships across groups. When the same construct(s) are measured, integrative data analysis (IDA) allows pooling of data from multiple sources into one data set to…
Descriptors: Educational Environment, Meta Analysis, Student Attitudes, Teacher Attitudes
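A minimal sketch of the pooling step alone: stack informant-level data sets with a source indicator so relationships across groups stay visible in one frame. Column names and values are assumptions for illustration.

    import pandas as pd

    students = pd.DataFrame({"school": [1, 1, 2], "climate": [3.2, 2.8, 3.9]})
    teachers = pd.DataFrame({"school": [1, 2], "climate": [3.5, 4.1]})

    pooled = pd.concat(
        [students.assign(informant="student"), teachers.assign(informant="teacher")],
        ignore_index=True,
    )
    print(pooled.groupby(["school", "informant"])["climate"].mean())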
Using Differential Item Functioning to Test for Interrater Reliability in Constructed Response Items
Walker, Cindy M.; Göçer Sahin, Sakine – Educational and Psychological Measurement, 2020
The purpose of this study was to investigate a new way of evaluating interrater reliability that can allow one to determine if two raters differ with respect to their rating on a polytomous rating scale or constructed response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and compared…
Descriptors: Test Bias, Interrater Reliability, Responses, Correlation
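A minimal sketch of the core move, with logistic-regression DIF and "rater" standing in for the usual demographic grouping on a binary item; real applications would match on observed total scores and handle polytomous ratings, and everything below is simulated for illustration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 500
    ability = rng.normal(size=n)                    # matching variable (simplified)
    rater = rng.integers(0, 2, size=n)              # 0 = rater A, 1 = rater B
    logit = ability - 0.5 * rater                   # rater B is harsher: built-in DIF
    score = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    design = sm.add_constant(np.column_stack([ability, rater]))
    fit = sm.Logit(score, design).fit(disp=0)
    print(np.round(fit.params, 2))                  # rater coefficient near -0.5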