Showing all 10 results
Peer reviewed
Olvera Astivia, Oscar Lorenzo; Kroc, Edward; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
Simulations concerning the distributional assumptions of coefficient alpha are contradictory. To provide a more principled theoretical framework, this article relies on the Fréchet-Hoeffding bounds to show that the distributions of the items play a role in the estimation of correlations and covariances. More specifically, these bounds…
Descriptors: Test Items, Test Reliability, Computation, Correlation
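For readers unfamiliar with the bounds the abstract invokes: the Fréchet-Hoeffding inequality constrains any joint distribution function H by its margins, which in turn restricts the attainable covariances and correlations. A standard statement (not drawn from the article itself):

```latex
\max\{F_X(x) + F_Y(y) - 1,\ 0\} \;\le\; H(x,y) \;\le\; \min\{F_X(x),\ F_Y(y)\}
```

Because the bounds depend on the marginal distributions, the maximum attainable Pearson correlation between two items can be strictly less than 1 when their margins differ, which is the mechanism the abstract points to.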
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal α proposed in Zumbo et al. as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
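As background, ordinal α (Zumbo, Gadermann, & Zeisser, 2007) is usually described as coefficient alpha computed from the polychoric rather than the Pearson correlation matrix. For k items with mean polychoric correlation ρ̄, the standardized form reduces to:

```latex
\alpha_{\text{ordinal}} = \frac{k\,\bar{\rho}}{1 + (k-1)\,\bar{\rho}}
```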
Peer reviewed
Olvera Astivia, Oscar L.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2015
To further understand the properties of data-generation algorithms for multivariate, nonnormal data, two Monte Carlo simulation studies comparing the Vale and Maurelli method and the Headrick fifth-order polynomial method were implemented. Combinations of skewness and kurtosis found in four published articles were run and attention was…
Descriptors: Data, Simulation, Monte Carlo Methods, Comparative Analysis
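Both methods in this comparison are power-method transforms of a standard normal variate. Below is a minimal sketch of the third-order (Fleishman) step that underlies the Vale-Maurelli method; the Headrick method adds Z⁴ and Z⁵ terms and two further moment equations. The target moments and starting values here are purely illustrative:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import skew, kurtosis

def fleishman_equations(params, target_skew, target_kurt):
    # Fleishman (1978) power method: Y = a + b*Z + c*Z^2 + d*Z^3, with a = -c
    b, c, d = params
    eq1 = b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1                  # Var(Y) = 1
    eq2 = 2*c*(b**2 + 24*b*d + 105*d**2 + 2) - target_skew     # skewness
    eq3 = 24*(b*d + c**2*(1 + b**2 + 28*b*d)
              + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - target_kurt  # excess kurtosis
    return [eq1, eq2, eq3]

rng = np.random.default_rng(1)
target_skew, target_kurt = 1.0, 2.0        # illustrative targets
b, c, d = fsolve(fleishman_equations, x0=[1.0, 0.1, 0.1],
                 args=(target_skew, target_kurt))
z = rng.standard_normal(1_000_000)
y = -c + b*z + c*z**2 + d*z**3             # a = -c keeps E(Y) = 0
print(skew(y), kurtosis(y))                # should land near the targets
```

The multivariate extension (the Vale-Maurelli part) applies this transform to correlated normals generated from an "intermediate" correlation matrix chosen so the transformed variables hit the target correlations.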
Peer reviewed
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D. – Educational and Psychological Measurement, 2014
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
Descriptors: Regression (Statistics), Test Bias, Effect Size, Test Items
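A sketch of the nested-models logistic regression DIF procedure the abstract evaluates, on hypothetical simulated data. Note that statsmodels reports McFadden's pseudo-R², while the DIF literature often uses other R² variants (e.g., Nagelkerke), so the numbers are illustrative:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: item (0/1), total test score, group (0 = reference, 1 = focal)
rng = np.random.default_rng(7)
n = 2000
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(1.2 * score + 0.5 * group)))  # uniform DIF built in
item = rng.binomial(1, p)

def fit(X):
    return sm.Logit(item, sm.add_constant(X)).fit(disp=0)

m1 = fit(np.column_stack([score]))                        # Model 1: score only
m2 = fit(np.column_stack([score, group]))                 # Model 2: + group (uniform DIF)
m3 = fit(np.column_stack([score, group, score * group]))  # Model 3: + interaction (nonuniform)

# Effect sizes discussed in the abstract: pseudo-R^2 change and the group log odds ratio
print("delta pseudo-R2 (uniform DIF):", m2.prsquared - m1.prsquared)
print("group log odds ratio:", m2.params[2])
print("nonuniform DIF p-value:", m3.pvalues[3])
```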
Peer reviewed
Gómez-Benito, Juana; Hidalgo, Maria Dolores; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
The objective of this article was to find an optimal decision rule for identifying polytomous items with large or moderate amounts of differential functioning. The effectiveness of combining statistical tests with effect size measures was assessed using logistic discriminant function analysis and two effect size measures: R² and…
Descriptors: Item Analysis, Test Items, Effect Size, Statistical Analysis
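Logistic discriminant function analysis reverses the usual regression: group membership is modeled from the matching score and the polytomous item response, and an item is flagged only when the significance test and the effect size both exceed their cutoffs. A hypothetical sketch (all data simulated):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical polytomous item (categories 0-3) with a group-related shift (DIF built in)
rng = np.random.default_rng(11)
n = 2000
group = rng.integers(0, 2, n)
theta = rng.normal(0, 1, n)
latent = theta + 0.4 * group + rng.normal(0, 1, n)
item = np.digitize(latent, [-1.0, 0.0, 1.0])   # discretize into categories 0..3
total = theta + rng.normal(0, 0.5, n)          # matching criterion

# Logistic discriminant function analysis: predict group from total and item
m0 = sm.Logit(group, sm.add_constant(np.column_stack([total]))).fit(disp=0)
m1 = sm.Logit(group, sm.add_constant(np.column_stack([total, item]))).fit(disp=0)

# Decision rule of the kind the abstract studies: significance test plus R^2 cutoff
lr_stat = 2 * (m1.llf - m0.llf)                # likelihood ratio test statistic
delta_r2 = m1.prsquared - m0.prsquared         # effect size: pseudo-R^2 change
print(lr_stat, delta_r2)
```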
Peer reviewed
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
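The mechanism is easy to reproduce: when a truly predictive regressor is measured with error, a correlated but truly irrelevant regressor absorbs part of its signal, and the nominal 5% test on that regressor rejects far too often. A minimal Monte Carlo sketch (all parameters illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Y depends only on the true X1, but X1 is observed with error; X2 correlates
# with X1, so unreliability lets X2 "soak up" variance it does not cause.
rng = np.random.default_rng(3)
n, reps, alpha = 200, 2000, 0.05
rejections = 0
for _ in range(reps):
    x1 = rng.normal(0, 1, n)
    x2 = 0.6 * x1 + np.sqrt(1 - 0.36) * rng.normal(0, 1, n)  # corr(x1, x2) = .6
    y = 0.5 * x1 + rng.normal(0, 1, n)                       # X2 truly irrelevant
    x1_obs = x1 + rng.normal(0, 1, n)                        # reliability ~ .50
    X = sm.add_constant(np.column_stack([x1_obs, x2]))
    fit = sm.OLS(y, X).fit()
    rejections += fit.pvalues[2] < alpha                     # test of the X2 slope
print("empirical Type I error for X2:", rejections / reps)   # well above .05
```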
Peer reviewed
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
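A small sketch of the design the abstract describes: clean one-factor data versus the same data contaminated by a mean-shifted subpopulation, comparing the leading eigenvalues that retention rules such as the Kaiser criterion or parallel analysis inspect (all values illustrative):

```python
import numpy as np

# One-factor data, then a 5% contamination from an unintended subpopulation
rng = np.random.default_rng(5)
n, p, loading = 500, 8, 0.7
f = rng.normal(0, 1, (n, 1))
clean = loading * f + np.sqrt(1 - loading**2) * rng.normal(0, 1, (n, p))

contam = clean.copy()
k = int(0.05 * n)                          # 5% of cases from an outlying subgroup
contam[:k] += rng.normal(4, 1, (k, p))     # mean-shifted subpopulation

for label, data in [("clean", clean), ("contaminated", contam)]:
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    print(label, np.round(eig[:3], 2))     # compare the leading eigenvalues
```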
Peer reviewed
Thomas, D. Roland; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is such doubt in research practice about the reliability of difference scores that granting agencies, journal editors, reviewers, and committees of graduate students' theses have been known to deplore their use. This most maligned index can be used in studies of change, growth, or perhaps discrepancy between two measures taken on the same…
Descriptors: Statistical Analysis, Reliability, Scores, Change
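The classical-test-theory result at the center of this debate: for a difference score D = X − Y, the reliability is

```latex
\rho_{DD'} = \frac{\sigma_X^2 \rho_{XX'} + \sigma_Y^2 \rho_{YY'} - 2\sigma_X \sigma_Y \rho_{XY}}
                  {\sigma_X^2 + \sigma_Y^2 - 2\sigma_X \sigma_Y \rho_{XY}}
```

which shrinks toward zero as the correlation between the two components approaches their reliabilities, the source of the traditional warnings the authors revisit.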
Peer reviewed
Zimmerman, Donald W.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2005
Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…
Descriptors: Statistical Analysis, Psychological Testing, Raw Scores, Evaluation Methods
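The apparent inconsistency is resolvable because percentile ranks are a monotone (indeed linear) transform of ranks, so rank-based statistics are unaffected by the conversion. A quick illustration with hypothetical data:

```python
import numpy as np
from scipy.stats import rankdata, pearsonr, spearmanr

# Percentile ranks are a linear transform of ranks, so rank-based statistics
# computed on them match nonparametric results on the raw scores.
rng = np.random.default_rng(9)
x = rng.normal(50, 10, 100)
y = 0.5 * x + rng.normal(0, 8, 100)

pct_x = 100 * rankdata(x) / len(x)      # percentile ranks of raw scores
pct_y = 100 * rankdata(y) / len(y)

print(spearmanr(x, y)[0])               # Spearman on raw scores
print(pearsonr(pct_x, pct_y)[0])        # Pearson on percentiles: identical
```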
Peer reviewed
Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2004
Based on seminal work by Lord and Hambleton, Swaminathan, and Rogers, this article is an analytical, graphical, and conceptual reminder that item response theory (IRT) parameter invariance only holds for perfect model fit in multiple populations or across multiple conditions and is thus an ideal state. In practice, one attempts to quantify the…
Descriptors: Correlation, Item Response Theory, Statistical Analysis, Evaluation Methods
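For reference, under the two-parameter logistic (2PL) model the invariance in question holds only up to the linear indeterminacy of the latent scale:

```latex
P(X = 1 \mid \theta) = \frac{1}{1 + e^{-a(\theta - b)}}, \qquad
\theta^{*} = A\theta + B \;\Rightarrow\; a^{*} = a/A, \quad b^{*} = Ab + B
```

In practice, model misfit perturbs these relations, which is why correlations between separately calibrated parameter estimates are examined as a gauge of the degree of (non)invariance.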