Showing 1 to 15 of 20 results
Peer reviewed
Tenko Raykov; Christine DiStefano; Lisa Calvocoressi – Educational and Psychological Measurement, 2024
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order factor models are examined as rival means for data description and explanation. To this end, we use an empirically relevant setting with…
Descriptors: Bayesian Statistics, Models, Decision Making, Comparative Analysis
Peer reviewed
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
Peer reviewed
Ulrich Schroeders; Florian Scharf; Gabriel Olaru – Educational and Psychological Measurement, 2024
Metaheuristics are optimization algorithms that efficiently solve a variety of complex combinatorial problems. In psychological research, metaheuristics have been applied in short-scale construction and model specification search. In the present study, we propose a bee swarm optimization (BSO) algorithm to explore the structure underlying a…
Descriptors: Structural Equation Models, Heuristics, Algorithms, Measurement Techniques
Peer reviewed
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Peer reviewed
Raykov, Tenko; Anthony, James C.; Menold, Natalja – Educational and Psychological Measurement, 2023
The population relationship between coefficient alpha and scale reliability is studied in the widely used setting of unidimensional multicomponent measuring instruments. It is demonstrated that for any set of component loadings on the common factor, regardless of the extent of their inequality, the discrepancy between alpha and reliability can be…
Descriptors: Correlation, Evaluation Research, Reliability, Measurement Techniques
Peer reviewed
Sideridis, Georgios; Tsaousis, Ioannis; Ghamdi, Hanan – Educational and Psychological Measurement, 2023
The purpose of the present study was to provide the means to evaluate the "interval-scaling" assumption that governs the use of parametric statistics and continuous data estimators in self-report instruments that utilize Likert-type scaling. Using simulated and real data, the methodology to test for this important assumption is evaluated…
Descriptors: Intervals, Scaling, Computer Software, Likert Scales
Peer reviewed
Akaeze, Hope O.; Lawrence, Frank R.; Wu, Jamie Heng-Chieh – Educational and Psychological Measurement, 2023
Multidimensionality and hierarchical data structure are common in assessment data. These design features, if not accounted for, can threaten the validity of the results and inferences generated from factor analysis, a method frequently employed to assess test dimensionality. In this article, we describe and demonstrate the application of the…
Descriptors: Measures (Individuals), Multidimensional Scaling, Tests, Hierarchical Linear Modeling
Peer reviewed
Cox, Kyle; Kelcey, Benjamin – Educational and Psychological Measurement, 2023
Multilevel structural equation models (MSEMs) are well suited for educational research because they accommodate complex systems involving latent variables in multilevel settings. Estimation using Croon's bias-corrected factor score (BCFS) path estimation has recently been extended to MSEMs and demonstrated promise with limited sample sizes. This…
Descriptors: Structural Equation Models, Educational Research, Hierarchical Linear Modeling, Sample Size
Peer reviewed
Liu, Xiaoling; Cao, Pei; Lai, Xinzhen; Wen, Jianbing; Yang, Yanyun – Educational and Psychological Measurement, 2023
Percentage of uncontaminated correlations (PUC), explained common variance (ECV), and omega hierarchical (ωH) have been used to assess the degree to which a scale is essentially unidimensional and to predict structural coefficient bias when a unidimensional measurement model is fit to multidimensional data. The usefulness of these indices…
Descriptors: Correlation, Measurement Techniques, Prediction, Regression (Statistics)
Peer reviewed
Kreitchmann, Rodrigo S.; Sorrel, Miguel A.; Abad, Francisco J. – Educational and Psychological Measurement, 2023
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of…
Descriptors: Measurement Techniques, Questionnaires, Social Desirability, Adaptive Testing
Peer reviewed
Lin, Yin; Brown, Anna; Williams, Paul – Educational and Psychological Measurement, 2023
Several forced-choice (FC) computerized adaptive tests (CATs) have emerged in the field of organizational psychology, all of them employing ideal-point items. However, although most items developed historically follow dominance response models, research on FC CAT using dominance items is limited. Existing research is heavily dominated by…
Descriptors: Measurement Techniques, Computer Assisted Testing, Adaptive Testing, Industrial Psychology
Peer reviewed
Raykov, Tenko; Pusic, Martin – Educational and Psychological Measurement, 2023
This note is concerned with evaluation of location parameters for polytomous items in multiple-component measuring instruments. A point and interval estimation procedure for these parameters is outlined that is developed within the framework of latent variable modeling. The method permits educational, behavioral, biomedical, and marketing…
Descriptors: Item Analysis, Measurement Techniques, Computer Software, Intervals
Peer reviewed
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
Peer reviewed
Jin, Kuan-Yu; Eckes, Thomas – Educational and Psychological Measurement, 2022
Performance assessments heavily rely on human ratings. These ratings are typically subject to various forms of error and bias, threatening the assessment outcomes' validity and fairness. Differential rater functioning (DRF) is a special kind of threat to fairness manifesting itself in unwanted interactions between raters and performance- or…
Descriptors: Performance Based Assessment, Rating Scales, Test Bias, Student Evaluation
Peer reviewed
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models